Jan 30 13:01:39.011206 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Jan 30 13:01:39.011228 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Wed Jan 29 10:12:48 -00 2025 Jan 30 13:01:39.011238 kernel: KASLR enabled Jan 30 13:01:39.011244 kernel: efi: EFI v2.7 by EDK II Jan 30 13:01:39.011250 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdba86018 ACPI 2.0=0xd9710018 RNG=0xd971e498 MEMRESERVE=0xd9b43d18 Jan 30 13:01:39.011256 kernel: random: crng init done Jan 30 13:01:39.011263 kernel: ACPI: Early table checksum verification disabled Jan 30 13:01:39.011270 kernel: ACPI: RSDP 0x00000000D9710018 000024 (v02 BOCHS ) Jan 30 13:01:39.011276 kernel: ACPI: XSDT 0x00000000D971FE98 000064 (v01 BOCHS BXPC 00000001 01000013) Jan 30 13:01:39.011284 kernel: ACPI: FACP 0x00000000D971FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) Jan 30 13:01:39.011290 kernel: ACPI: DSDT 0x00000000D9717518 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001) Jan 30 13:01:39.011296 kernel: ACPI: APIC 0x00000000D971FC18 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001) Jan 30 13:01:39.011303 kernel: ACPI: PPTT 0x00000000D971D898 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001) Jan 30 13:01:39.011309 kernel: ACPI: GTDT 0x00000000D971E818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Jan 30 13:01:39.011317 kernel: ACPI: MCFG 0x00000000D971E918 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 30 13:01:39.011325 kernel: ACPI: SPCR 0x00000000D971FF98 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) Jan 30 13:01:39.011332 kernel: ACPI: DBG2 0x00000000D971E418 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) Jan 30 13:01:39.011339 kernel: ACPI: IORT 0x00000000D971E718 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jan 30 13:01:39.011345 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600 Jan 30 13:01:39.011352 kernel: NUMA: Failed to initialise from firmware Jan 30 13:01:39.011365 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff] Jan 30 13:01:39.011372 kernel: NUMA: NODE_DATA [mem 0xdc958800-0xdc95dfff] Jan 30 13:01:39.011378 kernel: Zone ranges: Jan 30 13:01:39.011385 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff] Jan 30 13:01:39.011392 kernel: DMA32 empty Jan 30 13:01:39.011400 kernel: Normal empty Jan 30 13:01:39.011407 kernel: Movable zone start for each node Jan 30 13:01:39.011414 kernel: Early memory node ranges Jan 30 13:01:39.011420 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff] Jan 30 13:01:39.011427 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff] Jan 30 13:01:39.011434 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff] Jan 30 13:01:39.011440 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff] Jan 30 13:01:39.011447 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff] Jan 30 13:01:39.011454 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff] Jan 30 13:01:39.011460 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff] Jan 30 13:01:39.011467 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff] Jan 30 13:01:39.011474 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges Jan 30 13:01:39.011482 kernel: psci: probing for conduit method from ACPI. Jan 30 13:01:39.011488 kernel: psci: PSCIv1.1 detected in firmware. 
Jan 30 13:01:39.011495 kernel: psci: Using standard PSCI v0.2 function IDs Jan 30 13:01:39.011505 kernel: psci: Trusted OS migration not required Jan 30 13:01:39.011513 kernel: psci: SMC Calling Convention v1.1 Jan 30 13:01:39.011520 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) Jan 30 13:01:39.011529 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976 Jan 30 13:01:39.011537 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096 Jan 30 13:01:39.011544 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 Jan 30 13:01:39.011551 kernel: Detected PIPT I-cache on CPU0 Jan 30 13:01:39.011558 kernel: CPU features: detected: GIC system register CPU interface Jan 30 13:01:39.011565 kernel: CPU features: detected: Hardware dirty bit management Jan 30 13:01:39.011572 kernel: CPU features: detected: Spectre-v4 Jan 30 13:01:39.011579 kernel: CPU features: detected: Spectre-BHB Jan 30 13:01:39.011586 kernel: CPU features: kernel page table isolation forced ON by KASLR Jan 30 13:01:39.011601 kernel: CPU features: detected: Kernel page table isolation (KPTI) Jan 30 13:01:39.011610 kernel: CPU features: detected: ARM erratum 1418040 Jan 30 13:01:39.011623 kernel: CPU features: detected: SSBS not fully self-synchronizing Jan 30 13:01:39.011630 kernel: alternatives: applying boot alternatives Jan 30 13:01:39.011638 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=05d22c8845dec898f2b35f78b7d946edccf803dd23b974a9db2c3070ca1d8f8c Jan 30 13:01:39.011646 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jan 30 13:01:39.011653 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 30 13:01:39.011660 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 30 13:01:39.011674 kernel: Fallback order for Node 0: 0 Jan 30 13:01:39.011682 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024 Jan 30 13:01:39.011689 kernel: Policy zone: DMA Jan 30 13:01:39.011696 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 30 13:01:39.011705 kernel: software IO TLB: area num 4. Jan 30 13:01:39.011712 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB) Jan 30 13:01:39.011813 kernel: Memory: 2386532K/2572288K available (10240K kernel code, 2186K rwdata, 8096K rodata, 39360K init, 897K bss, 185756K reserved, 0K cma-reserved) Jan 30 13:01:39.011822 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Jan 30 13:01:39.011830 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 30 13:01:39.011837 kernel: rcu: RCU event tracing is enabled. Jan 30 13:01:39.011845 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Jan 30 13:01:39.011852 kernel: Trampoline variant of Tasks RCU enabled. Jan 30 13:01:39.011859 kernel: Tracing variant of Tasks RCU enabled. Jan 30 13:01:39.011866 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Jan 30 13:01:39.011873 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Jan 30 13:01:39.011881 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Jan 30 13:01:39.011891 kernel: GICv3: 256 SPIs implemented Jan 30 13:01:39.011898 kernel: GICv3: 0 Extended SPIs implemented Jan 30 13:01:39.011905 kernel: Root IRQ handler: gic_handle_irq Jan 30 13:01:39.011913 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Jan 30 13:01:39.011920 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 Jan 30 13:01:39.011927 kernel: ITS [mem 0x08080000-0x0809ffff] Jan 30 13:01:39.011934 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1) Jan 30 13:01:39.011941 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1) Jan 30 13:01:39.011948 kernel: GICv3: using LPI property table @0x00000000400f0000 Jan 30 13:01:39.011955 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000 Jan 30 13:01:39.011963 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 30 13:01:39.011972 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jan 30 13:01:39.011979 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Jan 30 13:01:39.011986 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Jan 30 13:01:39.011994 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Jan 30 13:01:39.012000 kernel: arm-pv: using stolen time PV Jan 30 13:01:39.012008 kernel: Console: colour dummy device 80x25 Jan 30 13:01:39.012015 kernel: ACPI: Core revision 20230628 Jan 30 13:01:39.012022 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Jan 30 13:01:39.012030 kernel: pid_max: default: 32768 minimum: 301 Jan 30 13:01:39.012037 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 30 13:01:39.012046 kernel: landlock: Up and running. Jan 30 13:01:39.012053 kernel: SELinux: Initializing. Jan 30 13:01:39.012060 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 30 13:01:39.012067 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 30 13:01:39.012075 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 30 13:01:39.012082 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 30 13:01:39.012089 kernel: rcu: Hierarchical SRCU implementation. Jan 30 13:01:39.012097 kernel: rcu: Max phase no-delay instances is 400. Jan 30 13:01:39.012104 kernel: Platform MSI: ITS@0x8080000 domain created Jan 30 13:01:39.012113 kernel: PCI/MSI: ITS@0x8080000 domain created Jan 30 13:01:39.012120 kernel: Remapping and enabling EFI services. Jan 30 13:01:39.012128 kernel: smp: Bringing up secondary CPUs ... 
Jan 30 13:01:39.012135 kernel: Detected PIPT I-cache on CPU1 Jan 30 13:01:39.012142 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 Jan 30 13:01:39.012150 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000 Jan 30 13:01:39.012157 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jan 30 13:01:39.012165 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Jan 30 13:01:39.012172 kernel: Detected PIPT I-cache on CPU2 Jan 30 13:01:39.012180 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000 Jan 30 13:01:39.012189 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000 Jan 30 13:01:39.012196 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jan 30 13:01:39.012208 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1] Jan 30 13:01:39.012218 kernel: Detected PIPT I-cache on CPU3 Jan 30 13:01:39.012225 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000 Jan 30 13:01:39.012233 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000 Jan 30 13:01:39.012240 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jan 30 13:01:39.012248 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1] Jan 30 13:01:39.012256 kernel: smp: Brought up 1 node, 4 CPUs Jan 30 13:01:39.012264 kernel: SMP: Total of 4 processors activated. Jan 30 13:01:39.012272 kernel: CPU features: detected: 32-bit EL0 Support Jan 30 13:01:39.012280 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Jan 30 13:01:39.012288 kernel: CPU features: detected: Common not Private translations Jan 30 13:01:39.012295 kernel: CPU features: detected: CRC32 instructions Jan 30 13:01:39.012303 kernel: CPU features: detected: Enhanced Virtualization Traps Jan 30 13:01:39.012311 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Jan 30 13:01:39.012318 kernel: CPU features: detected: LSE atomic instructions Jan 30 13:01:39.012327 kernel: CPU features: detected: Privileged Access Never Jan 30 13:01:39.012335 kernel: CPU features: detected: RAS Extension Support Jan 30 13:01:39.012343 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Jan 30 13:01:39.012350 kernel: CPU: All CPU(s) started at EL1 Jan 30 13:01:39.012358 kernel: alternatives: applying system-wide alternatives Jan 30 13:01:39.012365 kernel: devtmpfs: initialized Jan 30 13:01:39.012373 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 30 13:01:39.012381 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Jan 30 13:01:39.012389 kernel: pinctrl core: initialized pinctrl subsystem Jan 30 13:01:39.012398 kernel: SMBIOS 3.0.0 present. 
Jan 30 13:01:39.012406 kernel: DMI: QEMU KVM Virtual Machine, BIOS edk2-20230524-3.fc38 05/24/2023
Jan 30 13:01:39.012413 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 30 13:01:39.012421 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jan 30 13:01:39.012429 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jan 30 13:01:39.012436 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jan 30 13:01:39.012487 kernel: audit: initializing netlink subsys (disabled)
Jan 30 13:01:39.012495 kernel: audit: type=2000 audit(0.024:1): state=initialized audit_enabled=0 res=1
Jan 30 13:01:39.012503 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 30 13:01:39.012514 kernel: cpuidle: using governor menu
Jan 30 13:01:39.012522 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jan 30 13:01:39.012530 kernel: ASID allocator initialised with 32768 entries
Jan 30 13:01:39.012570 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 30 13:01:39.012579 kernel: Serial: AMBA PL011 UART driver
Jan 30 13:01:39.012587 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jan 30 13:01:39.012595 kernel: Modules: 0 pages in range for non-PLT usage
Jan 30 13:01:39.012603 kernel: Modules: 509040 pages in range for PLT usage
Jan 30 13:01:39.012615 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 30 13:01:39.012656 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jan 30 13:01:39.012665 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jan 30 13:01:39.012685 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jan 30 13:01:39.012693 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 30 13:01:39.012700 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jan 30 13:01:39.012708 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jan 30 13:01:39.012749 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jan 30 13:01:39.012757 kernel: ACPI: Added _OSI(Module Device)
Jan 30 13:01:39.012765 kernel: ACPI: Added _OSI(Processor Device)
Jan 30 13:01:39.012777 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 30 13:01:39.012785 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 30 13:01:39.012792 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 30 13:01:39.012869 kernel: ACPI: Interpreter enabled
Jan 30 13:01:39.012878 kernel: ACPI: Using GIC for interrupt routing
Jan 30 13:01:39.012915 kernel: ACPI: MCFG table detected, 1 entries
Jan 30 13:01:39.012927 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Jan 30 13:01:39.012935 kernel: printk: console [ttyAMA0] enabled
Jan 30 13:01:39.012943 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 30 13:01:39.013126 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 30 13:01:39.013211 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jan 30 13:01:39.013314 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jan 30 13:01:39.013384 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Jan 30 13:01:39.013454 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Jan 30 13:01:39.013464 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Jan 30 13:01:39.013472 kernel: PCI host bridge to bus 0000:00
Jan 30 13:01:39.013553 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Jan 30 13:01:39.013625 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jan 30 13:01:39.013715 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Jan 30 13:01:39.013780 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 30 13:01:39.013867 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Jan 30 13:01:39.013950 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Jan 30 13:01:39.014032 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Jan 30 13:01:39.014114 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Jan 30 13:01:39.014186 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Jan 30 13:01:39.014259 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Jan 30 13:01:39.014329 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Jan 30 13:01:39.014402 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Jan 30 13:01:39.014466 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Jan 30 13:01:39.014531 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jan 30 13:01:39.014594 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Jan 30 13:01:39.014604 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jan 30 13:01:39.014618 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jan 30 13:01:39.014627 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jan 30 13:01:39.014635 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jan 30 13:01:39.014643 kernel: iommu: Default domain type: Translated
Jan 30 13:01:39.014651 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jan 30 13:01:39.014661 kernel: efivars: Registered efivars operations
Jan 30 13:01:39.014703 kernel: vgaarb: loaded
Jan 30 13:01:39.014714 kernel: clocksource: Switched to clocksource arch_sys_counter
Jan 30 13:01:39.014722 kernel: VFS: Disk quotas dquot_6.6.0
Jan 30 13:01:39.014730 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 30 13:01:39.014738 kernel: pnp: PnP ACPI init
Jan 30 13:01:39.014832 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Jan 30 13:01:39.014844 kernel: pnp: PnP ACPI: found 1 devices
Jan 30 13:01:39.014851 kernel: NET: Registered PF_INET protocol family
Jan 30 13:01:39.014863 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 30 13:01:39.014871 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 30 13:01:39.014879 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 30 13:01:39.014887 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 30 13:01:39.014895 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 30 13:01:39.014902 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 30 13:01:39.014910 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 30 13:01:39.014918 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 30 13:01:39.014926 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 30 13:01:39.014936 kernel: PCI: CLS 0 bytes, default 64
Jan 30 13:01:39.014943 kernel: kvm [1]: HYP mode not available
Jan 30 13:01:39.014951 kernel: Initialise system trusted keyrings
Jan 30 13:01:39.014959 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 30 13:01:39.014967 kernel: Key type asymmetric registered
Jan 30 13:01:39.014974 kernel: Asymmetric key parser 'x509' registered
Jan 30 13:01:39.014982 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jan 30 13:01:39.014990 kernel: io scheduler mq-deadline registered
Jan 30 13:01:39.014998 kernel: io scheduler kyber registered
Jan 30 13:01:39.015007 kernel: io scheduler bfq registered
Jan 30 13:01:39.015015 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Jan 30 13:01:39.015023 kernel: ACPI: button: Power Button [PWRB]
Jan 30 13:01:39.015031 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Jan 30 13:01:39.015106 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Jan 30 13:01:39.015117 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 30 13:01:39.015125 kernel: thunder_xcv, ver 1.0
Jan 30 13:01:39.015132 kernel: thunder_bgx, ver 1.0
Jan 30 13:01:39.015140 kernel: nicpf, ver 1.0
Jan 30 13:01:39.015150 kernel: nicvf, ver 1.0
Jan 30 13:01:39.015226 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jan 30 13:01:39.015293 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-01-30T13:01:38 UTC (1738242098)
Jan 30 13:01:39.015304 kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 30 13:01:39.015312 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Jan 30 13:01:39.015320 kernel: watchdog: Delayed init of the lockup detector failed: -19
Jan 30 13:01:39.015327 kernel: watchdog: Hard watchdog permanently disabled
Jan 30 13:01:39.015335 kernel: NET: Registered PF_INET6 protocol family
Jan 30 13:01:39.015345 kernel: Segment Routing with IPv6
Jan 30 13:01:39.015353 kernel: In-situ OAM (IOAM) with IPv6
Jan 30 13:01:39.015360 kernel: NET: Registered PF_PACKET protocol family
Jan 30 13:01:39.015368 kernel: Key type dns_resolver registered
Jan 30 13:01:39.015376 kernel: registered taskstats version 1
Jan 30 13:01:39.015384 kernel: Loading compiled-in X.509 certificates
Jan 30 13:01:39.015391 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: f200c60883a4a38d496d9250faf693faee9d7415'
Jan 30 13:01:39.015399 kernel: Key type .fscrypt registered
Jan 30 13:01:39.015406 kernel: Key type fscrypt-provisioning registered
Jan 30 13:01:39.015416 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 30 13:01:39.015424 kernel: ima: Allocated hash algorithm: sha1 Jan 30 13:01:39.015431 kernel: ima: No architecture policies found Jan 30 13:01:39.015439 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Jan 30 13:01:39.015446 kernel: clk: Disabling unused clocks Jan 30 13:01:39.015454 kernel: Freeing unused kernel memory: 39360K Jan 30 13:01:39.015461 kernel: Run /init as init process Jan 30 13:01:39.015469 kernel: with arguments: Jan 30 13:01:39.015477 kernel: /init Jan 30 13:01:39.015486 kernel: with environment: Jan 30 13:01:39.015494 kernel: HOME=/ Jan 30 13:01:39.015501 kernel: TERM=linux Jan 30 13:01:39.015509 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 30 13:01:39.015519 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 30 13:01:39.015528 systemd[1]: Detected virtualization kvm. Jan 30 13:01:39.015537 systemd[1]: Detected architecture arm64. Jan 30 13:01:39.015546 systemd[1]: Running in initrd. Jan 30 13:01:39.015554 systemd[1]: No hostname configured, using default hostname. Jan 30 13:01:39.015562 systemd[1]: Hostname set to <localhost>. Jan 30 13:01:39.015571 systemd[1]: Initializing machine ID from VM UUID. Jan 30 13:01:39.015579 systemd[1]: Queued start job for default target initrd.target. Jan 30 13:01:39.015587 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 30 13:01:39.015595 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 30 13:01:39.015604 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 30 13:01:39.015621 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 30 13:01:39.015630 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 30 13:01:39.015638 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 30 13:01:39.015648 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 30 13:01:39.015657 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 30 13:01:39.015666 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 30 13:01:39.015755 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 30 13:01:39.015766 systemd[1]: Reached target paths.target - Path Units. Jan 30 13:01:39.015775 systemd[1]: Reached target slices.target - Slice Units. Jan 30 13:01:39.015783 systemd[1]: Reached target swap.target - Swaps. Jan 30 13:01:39.015791 systemd[1]: Reached target timers.target - Timer Units. Jan 30 13:01:39.015800 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 30 13:01:39.015808 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 30 13:01:39.015816 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 30 13:01:39.015824 systemd[1]: Listening on systemd-journald.socket - Journal Socket. 
Jan 30 13:01:39.015832 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 30 13:01:39.015842 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 30 13:01:39.015851 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 30 13:01:39.015859 systemd[1]: Reached target sockets.target - Socket Units. Jan 30 13:01:39.015867 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 30 13:01:39.015875 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 30 13:01:39.015883 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 30 13:01:39.015892 systemd[1]: Starting systemd-fsck-usr.service... Jan 30 13:01:39.015900 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 30 13:01:39.015910 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 30 13:01:39.015918 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 13:01:39.015926 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 30 13:01:39.015935 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 30 13:01:39.015943 systemd[1]: Finished systemd-fsck-usr.service. Jan 30 13:01:39.015952 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 30 13:01:39.015963 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:01:39.015972 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 30 13:01:39.015980 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 30 13:01:39.015989 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 30 13:01:39.016022 systemd-journald[237]: Collecting audit messages is disabled. Jan 30 13:01:39.016045 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 30 13:01:39.016053 kernel: Bridge firewalling registered Jan 30 13:01:39.016062 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 30 13:01:39.016071 systemd-journald[237]: Journal started Jan 30 13:01:39.016093 systemd-journald[237]: Runtime Journal (/run/log/journal/f9d45b18a2964dc7b6d17aeee0eb157f) is 5.9M, max 47.3M, 41.4M free. Jan 30 13:01:38.982221 systemd-modules-load[238]: Inserted module 'overlay' Jan 30 13:01:39.013954 systemd-modules-load[238]: Inserted module 'br_netfilter' Jan 30 13:01:39.020600 systemd[1]: Started systemd-journald.service - Journal Service. Jan 30 13:01:39.021942 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 13:01:39.025997 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 30 13:01:39.041850 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 30 13:01:39.047069 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 30 13:01:39.049150 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 30 13:01:39.054370 dracut-cmdline[269]: dracut-dracut-053 Jan 30 13:01:39.058835 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Jan 30 13:01:39.061267 dracut-cmdline[269]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=05d22c8845dec898f2b35f78b7d946edccf803dd23b974a9db2c3070ca1d8f8c
Jan 30 13:01:39.063554 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 30 13:01:39.071912 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 30 13:01:39.107503 systemd-resolved[297]: Positive Trust Anchors:
Jan 30 13:01:39.107522 systemd-resolved[297]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 30 13:01:39.107554 systemd-resolved[297]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 30 13:01:39.115813 systemd-resolved[297]: Defaulting to hostname 'linux'.
Jan 30 13:01:39.116983 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 30 13:01:39.118715 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 30 13:01:39.153716 kernel: SCSI subsystem initialized
Jan 30 13:01:39.160689 kernel: Loading iSCSI transport class v2.0-870.
Jan 30 13:01:39.169706 kernel: iscsi: registered transport (tcp)
Jan 30 13:01:39.184754 kernel: iscsi: registered transport (qla4xxx)
Jan 30 13:01:39.184775 kernel: QLogic iSCSI HBA Driver
Jan 30 13:01:39.227763 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 30 13:01:39.236851 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 30 13:01:39.254102 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 30 13:01:39.254181 kernel: device-mapper: uevent: version 1.0.3
Jan 30 13:01:39.254193 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 30 13:01:39.311664 kernel: raid6: neonx8 gen() 14298 MB/s
Jan 30 13:01:39.328711 kernel: raid6: neonx4 gen() 12881 MB/s
Jan 30 13:01:39.347855 kernel: raid6: neonx2 gen() 12404 MB/s
Jan 30 13:01:39.364820 kernel: raid6: neonx1 gen() 10098 MB/s
Jan 30 13:01:39.381723 kernel: raid6: int64x8 gen() 6720 MB/s
Jan 30 13:01:39.400325 kernel: raid6: int64x4 gen() 6433 MB/s
Jan 30 13:01:39.416731 kernel: raid6: int64x2 gen() 6127 MB/s
Jan 30 13:01:39.433962 kernel: raid6: int64x1 gen() 5028 MB/s
Jan 30 13:01:39.434036 kernel: raid6: using algorithm neonx8 gen() 14298 MB/s
Jan 30 13:01:39.451894 kernel: raid6: .... xor() 11883 MB/s, rmw enabled
Jan 30 13:01:39.451962 kernel: raid6: using neon recovery algorithm
Jan 30 13:01:39.458545 kernel: xor: measuring software checksum speed
Jan 30 13:01:39.459895 kernel: 8regs : 1735 MB/sec
Jan 30 13:01:39.459930 kernel: 32regs : 19646 MB/sec
Jan 30 13:01:39.461197 kernel: arm64_neon : 26936 MB/sec
Jan 30 13:01:39.461225 kernel: xor: using function: arm64_neon (26936 MB/sec)
Jan 30 13:01:39.512712 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 30 13:01:39.524105 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 30 13:01:39.534906 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 30 13:01:39.547982 systemd-udevd[463]: Using default interface naming scheme 'v255'.
Jan 30 13:01:39.551212 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 30 13:01:39.559876 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 30 13:01:39.572637 dracut-pre-trigger[472]: rd.md=0: removing MD RAID activation
Jan 30 13:01:39.605711 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 30 13:01:39.625942 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 30 13:01:39.667352 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 30 13:01:39.676882 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 30 13:01:39.689280 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 30 13:01:39.691354 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 30 13:01:39.692875 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 30 13:01:39.696132 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 30 13:01:39.704892 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 30 13:01:39.718185 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 30 13:01:39.730561 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Jan 30 13:01:39.744698 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jan 30 13:01:39.744809 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 30 13:01:39.744828 kernel: GPT:9289727 != 19775487
Jan 30 13:01:39.744838 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 30 13:01:39.744847 kernel: GPT:9289727 != 19775487
Jan 30 13:01:39.744857 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 30 13:01:39.744867 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 30 13:01:39.735104 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 30 13:01:39.735227 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 13:01:39.737234 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 30 13:01:39.743136 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 30 13:01:39.743309 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 13:01:39.751257 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 13:01:39.764073 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 13:01:39.772445 kernel: BTRFS: device fsid f02ec3fd-6702-4c1a-b68e-9001713a3a08 devid 1 transid 38 /dev/vda3 scanned by (udev-worker) (507) Jan 30 13:01:39.772469 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (508) Jan 30 13:01:39.781125 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jan 30 13:01:39.782741 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:01:39.791586 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jan 30 13:01:39.798563 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jan 30 13:01:39.799990 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jan 30 13:01:39.805923 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 30 13:01:39.817868 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 30 13:01:39.819746 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 30 13:01:39.825045 disk-uuid[552]: Primary Header is updated. Jan 30 13:01:39.825045 disk-uuid[552]: Secondary Entries is updated. Jan 30 13:01:39.825045 disk-uuid[552]: Secondary Header is updated. Jan 30 13:01:39.828367 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 30 13:01:39.844693 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 30 13:01:39.847965 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 13:01:40.845498 disk-uuid[553]: The operation has completed successfully. Jan 30 13:01:40.846730 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 30 13:01:40.866757 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 30 13:01:40.866858 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 30 13:01:40.887946 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 30 13:01:40.895105 sh[574]: Success Jan 30 13:01:40.927100 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Jan 30 13:01:40.982281 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 30 13:01:40.984306 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 30 13:01:40.986019 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 30 13:01:40.997047 kernel: BTRFS info (device dm-0): first mount of filesystem f02ec3fd-6702-4c1a-b68e-9001713a3a08 Jan 30 13:01:40.997107 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jan 30 13:01:40.997127 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 30 13:01:40.998151 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 30 13:01:40.998921 kernel: BTRFS info (device dm-0): using free space tree Jan 30 13:01:41.004395 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 30 13:01:41.005865 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 30 13:01:41.020845 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 30 13:01:41.022455 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Jan 30 13:01:41.030958 kernel: BTRFS info (device vda6): first mount of filesystem db40e17a-cddf-4890-8d80-4d8cda0a956a Jan 30 13:01:41.031017 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Jan 30 13:01:41.031028 kernel: BTRFS info (device vda6): using free space tree Jan 30 13:01:41.038730 kernel: BTRFS info (device vda6): auto enabling async discard Jan 30 13:01:41.049055 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 30 13:01:41.050243 kernel: BTRFS info (device vda6): last unmount of filesystem db40e17a-cddf-4890-8d80-4d8cda0a956a Jan 30 13:01:41.056386 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 30 13:01:41.064875 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 30 13:01:41.137855 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 30 13:01:41.147845 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 30 13:01:41.169943 ignition[675]: Ignition 2.19.0 Jan 30 13:01:41.169953 ignition[675]: Stage: fetch-offline Jan 30 13:01:41.169991 ignition[675]: no configs at "/usr/lib/ignition/base.d" Jan 30 13:01:41.169999 ignition[675]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 30 13:01:41.170226 ignition[675]: parsed url from cmdline: "" Jan 30 13:01:41.170230 ignition[675]: no config URL provided Jan 30 13:01:41.170234 ignition[675]: reading system config file "/usr/lib/ignition/user.ign" Jan 30 13:01:41.170241 ignition[675]: no config at "/usr/lib/ignition/user.ign" Jan 30 13:01:41.170265 ignition[675]: op(1): [started] loading QEMU firmware config module Jan 30 13:01:41.170269 ignition[675]: op(1): executing: "modprobe" "qemu_fw_cfg" Jan 30 13:01:41.179493 systemd-networkd[764]: lo: Link UP Jan 30 13:01:41.179504 systemd-networkd[764]: lo: Gained carrier Jan 30 13:01:41.180234 systemd-networkd[764]: Enumeration completed Jan 30 13:01:41.180533 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 30 13:01:41.183332 ignition[675]: op(1): [finished] loading QEMU firmware config module Jan 30 13:01:41.180766 systemd-networkd[764]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 13:01:41.180769 systemd-networkd[764]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 30 13:01:41.181755 systemd-networkd[764]: eth0: Link UP Jan 30 13:01:41.181758 systemd-networkd[764]: eth0: Gained carrier Jan 30 13:01:41.181765 systemd-networkd[764]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 13:01:41.182561 systemd[1]: Reached target network.target - Network. Jan 30 13:01:41.200727 systemd-networkd[764]: eth0: DHCPv4 address 10.0.0.89/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 30 13:01:41.231268 ignition[675]: parsing config with SHA512: 8711c0950cc84a96a37010c6da4f786d3220d9b0d95f59a090e7786bc82e8961dd37afbe505a23806849a94d4fd338e1923a9d651349c59c053037c54b88af8a Jan 30 13:01:41.235714 unknown[675]: fetched base config from "system" Jan 30 13:01:41.235723 unknown[675]: fetched user config from "qemu" Jan 30 13:01:41.236115 ignition[675]: fetch-offline: fetch-offline passed Jan 30 13:01:41.236174 ignition[675]: Ignition finished successfully Jan 30 13:01:41.239705 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
Jan 30 13:01:41.241689 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jan 30 13:01:41.250893 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 30 13:01:41.261358 ignition[771]: Ignition 2.19.0 Jan 30 13:01:41.261369 ignition[771]: Stage: kargs Jan 30 13:01:41.261545 ignition[771]: no configs at "/usr/lib/ignition/base.d" Jan 30 13:01:41.261554 ignition[771]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 30 13:01:41.264537 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 30 13:01:41.262429 ignition[771]: kargs: kargs passed Jan 30 13:01:41.262477 ignition[771]: Ignition finished successfully Jan 30 13:01:41.273906 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 30 13:01:41.284360 ignition[780]: Ignition 2.19.0 Jan 30 13:01:41.284371 ignition[780]: Stage: disks Jan 30 13:01:41.284562 ignition[780]: no configs at "/usr/lib/ignition/base.d" Jan 30 13:01:41.284571 ignition[780]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 30 13:01:41.285536 ignition[780]: disks: disks passed Jan 30 13:01:41.288137 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 30 13:01:41.285591 ignition[780]: Ignition finished successfully Jan 30 13:01:41.289877 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 30 13:01:41.291710 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 30 13:01:41.296086 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 30 13:01:41.297332 systemd[1]: Reached target sysinit.target - System Initialization. Jan 30 13:01:41.299306 systemd[1]: Reached target basic.target - Basic System. Jan 30 13:01:41.308828 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 30 13:01:41.323400 systemd-fsck[790]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jan 30 13:01:41.327952 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 30 13:01:41.340797 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 30 13:01:41.382680 kernel: EXT4-fs (vda9): mounted filesystem 8499bb43-f860-448d-b3b8-5a1fc2b80abf r/w with ordered data mode. Quota mode: none. Jan 30 13:01:41.383258 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 30 13:01:41.385268 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 30 13:01:41.404842 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 30 13:01:41.406827 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 30 13:01:41.409281 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 30 13:01:41.409337 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 30 13:01:41.409361 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 30 13:01:41.417793 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (798) Jan 30 13:01:41.414229 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. 
Jan 30 13:01:41.422739 kernel: BTRFS info (device vda6): first mount of filesystem db40e17a-cddf-4890-8d80-4d8cda0a956a Jan 30 13:01:41.422763 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Jan 30 13:01:41.422774 kernel: BTRFS info (device vda6): using free space tree Jan 30 13:01:41.417384 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 30 13:01:41.425698 kernel: BTRFS info (device vda6): auto enabling async discard Jan 30 13:01:41.426421 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 30 13:01:41.458104 initrd-setup-root[822]: cut: /sysroot/etc/passwd: No such file or directory Jan 30 13:01:41.463057 initrd-setup-root[829]: cut: /sysroot/etc/group: No such file or directory Jan 30 13:01:41.467752 initrd-setup-root[836]: cut: /sysroot/etc/shadow: No such file or directory Jan 30 13:01:41.472300 initrd-setup-root[843]: cut: /sysroot/etc/gshadow: No such file or directory Jan 30 13:01:41.552982 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 30 13:01:41.561899 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 30 13:01:41.564421 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 30 13:01:41.569688 kernel: BTRFS info (device vda6): last unmount of filesystem db40e17a-cddf-4890-8d80-4d8cda0a956a Jan 30 13:01:41.588251 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 30 13:01:41.590191 ignition[912]: INFO : Ignition 2.19.0 Jan 30 13:01:41.590191 ignition[912]: INFO : Stage: mount Jan 30 13:01:41.590191 ignition[912]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 13:01:41.590191 ignition[912]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 30 13:01:41.595589 ignition[912]: INFO : mount: mount passed Jan 30 13:01:41.595589 ignition[912]: INFO : Ignition finished successfully Jan 30 13:01:41.592402 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 30 13:01:41.601803 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 30 13:01:41.995920 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 30 13:01:42.006899 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 30 13:01:42.014507 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (927) Jan 30 13:01:42.014568 kernel: BTRFS info (device vda6): first mount of filesystem db40e17a-cddf-4890-8d80-4d8cda0a956a Jan 30 13:01:42.014580 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Jan 30 13:01:42.015491 kernel: BTRFS info (device vda6): using free space tree Jan 30 13:01:42.023687 kernel: BTRFS info (device vda6): auto enabling async discard Jan 30 13:01:42.024459 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 30 13:01:42.049996 ignition[944]: INFO : Ignition 2.19.0
Jan 30 13:01:42.049996 ignition[944]: INFO : Stage: files
Jan 30 13:01:42.051693 ignition[944]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 30 13:01:42.051693 ignition[944]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 30 13:01:42.053974 ignition[944]: DEBUG : files: compiled without relabeling support, skipping
Jan 30 13:01:42.053974 ignition[944]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 30 13:01:42.053974 ignition[944]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 30 13:01:42.058339 ignition[944]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 30 13:01:42.058339 ignition[944]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 30 13:01:42.060949 unknown[944]: wrote ssh authorized keys file for user: core
Jan 30 13:01:42.062082 ignition[944]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 30 13:01:42.063372 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jan 30 13:01:42.063372 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Jan 30 13:01:42.124029 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 30 13:01:42.229004 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jan 30 13:01:42.229004 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jan 30 13:01:42.233379 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jan 30 13:01:42.233379 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 30 13:01:42.233379 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 30 13:01:42.233379 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 30 13:01:42.233379 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 30 13:01:42.233379 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 30 13:01:42.233379 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 30 13:01:42.233379 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 30 13:01:42.233379 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 30 13:01:42.233379 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Jan 30 13:01:42.233379 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Jan 30 13:01:42.233379 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Jan 30 13:01:42.233379 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-arm64.raw: attempt #1
Jan 30 13:01:42.496943 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jan 30 13:01:42.715388 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Jan 30 13:01:42.715388 ignition[944]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jan 30 13:01:42.719176 ignition[944]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 30 13:01:42.719176 ignition[944]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 30 13:01:42.719176 ignition[944]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jan 30 13:01:42.719176 ignition[944]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Jan 30 13:01:42.719176 ignition[944]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 30 13:01:42.719176 ignition[944]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 30 13:01:42.719176 ignition[944]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Jan 30 13:01:42.719176 ignition[944]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Jan 30 13:01:42.742776 ignition[944]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jan 30 13:01:42.747725 ignition[944]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jan 30 13:01:42.747725 ignition[944]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Jan 30 13:01:42.747725 ignition[944]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Jan 30 13:01:42.747725 ignition[944]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Jan 30 13:01:42.747725 ignition[944]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 30 13:01:42.747725 ignition[944]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 30 13:01:42.747725 ignition[944]: INFO : files: files passed
Jan 30 13:01:42.747725 ignition[944]: INFO : Ignition finished successfully
Jan 30 13:01:42.749775 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 30 13:01:42.764878 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 30 13:01:42.766914 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 30 13:01:42.770449 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 30 13:01:42.771480 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 30 13:01:42.776209 initrd-setup-root-after-ignition[972]: grep: /sysroot/oem/oem-release: No such file or directory Jan 30 13:01:42.780067 initrd-setup-root-after-ignition[974]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 30 13:01:42.780067 initrd-setup-root-after-ignition[974]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 30 13:01:42.783322 initrd-setup-root-after-ignition[978]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 30 13:01:42.784744 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 30 13:01:42.786269 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 30 13:01:42.794873 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 30 13:01:42.819179 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 30 13:01:42.819289 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 30 13:01:42.821764 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 30 13:01:42.823725 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 30 13:01:42.825686 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 30 13:01:42.832866 systemd-networkd[764]: eth0: Gained IPv6LL Jan 30 13:01:42.836859 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 30 13:01:42.850771 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 30 13:01:42.860855 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 30 13:01:42.869002 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 30 13:01:42.870255 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 30 13:01:42.872349 systemd[1]: Stopped target timers.target - Timer Units. Jan 30 13:01:42.874168 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 30 13:01:42.874298 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 30 13:01:42.876823 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 30 13:01:42.879623 systemd[1]: Stopped target basic.target - Basic System. Jan 30 13:01:42.880855 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 30 13:01:42.883220 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 30 13:01:42.885469 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 30 13:01:42.887556 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 30 13:01:42.889438 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 30 13:01:42.891517 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 30 13:01:42.893628 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 30 13:01:42.895602 systemd[1]: Stopped target swap.target - Swaps. Jan 30 13:01:42.897242 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 30 13:01:42.897379 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 30 13:01:42.899770 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. 
Jan 30 13:01:42.901808 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 30 13:01:42.903885 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 30 13:01:42.904802 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 30 13:01:42.906175 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 30 13:01:42.906390 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 30 13:01:42.909436 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 30 13:01:42.909558 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 30 13:01:42.911555 systemd[1]: Stopped target paths.target - Path Units. Jan 30 13:01:42.914566 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 30 13:01:42.914705 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 30 13:01:42.917108 systemd[1]: Stopped target slices.target - Slice Units. Jan 30 13:01:42.918628 systemd[1]: Stopped target sockets.target - Socket Units. Jan 30 13:01:42.920221 systemd[1]: iscsid.socket: Deactivated successfully. Jan 30 13:01:42.920313 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 30 13:01:42.922033 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 30 13:01:42.922118 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 30 13:01:42.924653 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 30 13:01:42.924793 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 30 13:01:42.926531 systemd[1]: ignition-files.service: Deactivated successfully. Jan 30 13:01:42.926644 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 30 13:01:42.939942 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 30 13:01:42.941732 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 30 13:01:42.943325 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 30 13:01:42.943465 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 30 13:01:42.945539 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 30 13:01:42.945656 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 30 13:01:42.953580 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 30 13:01:42.953722 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 30 13:01:42.957544 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 30 13:01:42.963518 ignition[1000]: INFO : Ignition 2.19.0 Jan 30 13:01:42.963518 ignition[1000]: INFO : Stage: umount Jan 30 13:01:42.963518 ignition[1000]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 13:01:42.963518 ignition[1000]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 30 13:01:42.968809 ignition[1000]: INFO : umount: umount passed Jan 30 13:01:42.968809 ignition[1000]: INFO : Ignition finished successfully Jan 30 13:01:42.966292 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 30 13:01:42.966400 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 30 13:01:42.967751 systemd[1]: Stopped target network.target - Network. Jan 30 13:01:42.969741 systemd[1]: ignition-disks.service: Deactivated successfully. 
Jan 30 13:01:42.969822 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 30 13:01:42.971554 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 30 13:01:42.971605 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 30 13:01:42.973421 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 30 13:01:42.973466 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 30 13:01:42.975391 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 30 13:01:42.975433 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 30 13:01:42.977665 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 30 13:01:42.979861 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 30 13:01:42.985721 systemd-networkd[764]: eth0: DHCPv6 lease lost Jan 30 13:01:42.986385 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 30 13:01:42.986529 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 30 13:01:42.990625 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 30 13:01:42.990877 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 30 13:01:42.994054 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 30 13:01:42.994108 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 30 13:01:43.005853 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 30 13:01:43.007659 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 30 13:01:43.007742 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 30 13:01:43.012960 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 30 13:01:43.013017 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 30 13:01:43.015278 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 30 13:01:43.015324 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 30 13:01:43.021529 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 30 13:01:43.021583 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 30 13:01:43.024549 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 13:01:43.031967 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 30 13:01:43.033501 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 30 13:01:43.036083 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 30 13:01:43.036171 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 30 13:01:43.038101 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 30 13:01:43.038148 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 30 13:01:43.044195 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 30 13:01:43.044442 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 30 13:01:43.046149 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 30 13:01:43.046187 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 30 13:01:43.047440 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 30 13:01:43.047473 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. 
Jan 30 13:01:43.048612 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 30 13:01:43.048683 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 30 13:01:43.051579 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 30 13:01:43.051637 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 30 13:01:43.055296 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 30 13:01:43.055340 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 13:01:43.069852 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 30 13:01:43.072291 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 30 13:01:43.072361 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 30 13:01:43.074712 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 30 13:01:43.074759 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 30 13:01:43.076973 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 30 13:01:43.077031 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 30 13:01:43.079587 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 30 13:01:43.079646 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:01:43.082168 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 30 13:01:43.082258 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 30 13:01:43.084750 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 30 13:01:43.087013 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 30 13:01:43.096422 systemd[1]: Switching root. Jan 30 13:01:43.130013 systemd-journald[237]: Journal stopped Jan 30 13:01:43.944591 systemd-journald[237]: Received SIGTERM from PID 1 (systemd). Jan 30 13:01:43.944658 kernel: SELinux: policy capability network_peer_controls=1 Jan 30 13:01:43.944797 kernel: SELinux: policy capability open_perms=1 Jan 30 13:01:43.944808 kernel: SELinux: policy capability extended_socket_class=1 Jan 30 13:01:43.944823 kernel: SELinux: policy capability always_check_network=0 Jan 30 13:01:43.944832 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 30 13:01:43.944842 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 30 13:01:43.944852 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 30 13:01:43.944861 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 30 13:01:43.944871 kernel: audit: type=1403 audit(1738242103.280:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 30 13:01:43.944882 systemd[1]: Successfully loaded SELinux policy in 40.354ms. Jan 30 13:01:43.944904 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.692ms. Jan 30 13:01:43.944916 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 30 13:01:43.944927 systemd[1]: Detected virtualization kvm. Jan 30 13:01:43.944938 systemd[1]: Detected architecture arm64. 
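After the switch to the real root, PID 1 loads the SELinux policy in about 40 ms and relabels /dev, /dev/shm, /run and /sys/fs/cgroup, as logged above. A small sketch, assuming the usual selinuxfs mount at /sys/fs/selinux, that reports whether the loaded policy is enforcing or permissive:

#!/usr/bin/env python3
"""Report SELinux status via selinuxfs (assumed mounted at /sys/fs/selinux)."""
from pathlib import Path

SELINUXFS = Path("/sys/fs/selinux")

def main() -> None:
    enforce = SELINUXFS / "enforce"
    if not enforce.exists():
        print("selinuxfs not mounted; SELinux disabled or unavailable")
        return
    mode = enforce.read_text().strip()
    print("SELinux mode:", "enforcing" if mode == "1" else "permissive")
    policyvers = SELINUXFS / "policyvers"
    if policyvers.exists():
        # Highest policy version the running kernel supports.
        print("max policy version:", policyvers.read_text().strip())

if __name__ == "__main__":
    main()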
Jan 30 13:01:43.944952 systemd[1]: Detected first boot. Jan 30 13:01:43.944962 systemd[1]: Initializing machine ID from VM UUID. Jan 30 13:01:43.944973 zram_generator::config[1045]: No configuration found. Jan 30 13:01:43.944984 systemd[1]: Populated /etc with preset unit settings. Jan 30 13:01:43.944996 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 30 13:01:43.945006 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 30 13:01:43.945017 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 30 13:01:43.945028 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 30 13:01:43.945039 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 30 13:01:43.945050 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 30 13:01:43.945060 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 30 13:01:43.945070 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 30 13:01:43.945083 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 30 13:01:43.945093 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 30 13:01:43.945104 systemd[1]: Created slice user.slice - User and Session Slice. Jan 30 13:01:43.945114 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 30 13:01:43.945125 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 30 13:01:43.945135 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 30 13:01:43.945146 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 30 13:01:43.945157 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 30 13:01:43.945168 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 30 13:01:43.945182 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Jan 30 13:01:43.945193 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 30 13:01:43.945203 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 30 13:01:43.945213 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 30 13:01:43.945237 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 30 13:01:43.945248 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 30 13:01:43.945258 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 30 13:01:43.945270 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 30 13:01:43.945282 systemd[1]: Reached target slices.target - Slice Units. Jan 30 13:01:43.945292 systemd[1]: Reached target swap.target - Swaps. Jan 30 13:01:43.945303 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 30 13:01:43.945314 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 30 13:01:43.945324 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 30 13:01:43.945336 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. 
Jan 30 13:01:43.945346 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 30 13:01:43.945357 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 30 13:01:43.945368 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 30 13:01:43.945380 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 30 13:01:43.945390 systemd[1]: Mounting media.mount - External Media Directory... Jan 30 13:01:43.945401 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 30 13:01:43.945411 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 30 13:01:43.945423 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 30 13:01:43.945434 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 30 13:01:43.945445 systemd[1]: Reached target machines.target - Containers. Jan 30 13:01:43.945455 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 30 13:01:43.945466 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 13:01:43.945479 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 30 13:01:43.945490 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 30 13:01:43.945500 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 13:01:43.945511 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 30 13:01:43.945521 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 30 13:01:43.945532 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 30 13:01:43.945542 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 30 13:01:43.945553 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 30 13:01:43.945565 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 30 13:01:43.945575 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 30 13:01:43.945586 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 30 13:01:43.945596 systemd[1]: Stopped systemd-fsck-usr.service. Jan 30 13:01:43.945606 kernel: loop: module loaded Jan 30 13:01:43.945616 kernel: ACPI: bus type drm_connector registered Jan 30 13:01:43.945632 kernel: fuse: init (API version 7.39) Jan 30 13:01:43.945642 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 30 13:01:43.945653 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 30 13:01:43.945672 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 30 13:01:43.945684 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 30 13:01:43.945715 systemd-journald[1112]: Collecting audit messages is disabled. Jan 30 13:01:43.945736 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 30 13:01:43.945748 systemd-journald[1112]: Journal started Jan 30 13:01:43.945769 systemd-journald[1112]: Runtime Journal (/run/log/journal/f9d45b18a2964dc7b6d17aeee0eb157f) is 5.9M, max 47.3M, 41.4M free. 
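systemd-journald reports its runtime journal under /run/log/journal/f9d45b18a2964dc7b6d17aeee0eb157f at 5.9M used with a 47.3M cap. The same usage figure can be reproduced by walking that directory; a sketch assuming only the path shown in the entry above:

#!/usr/bin/env python3
"""Sum the on-disk size of the runtime journal directory named in the log."""
import os
from pathlib import Path

JOURNAL_DIR = Path("/run/log/journal")  # the per-machine subdirectory sits below this

def dir_size(path: Path) -> int:
    total = 0
    for root, _dirs, files in os.walk(path):
        for name in files:
            try:
                total += os.path.getsize(os.path.join(root, name))
            except OSError:
                pass  # journal files may rotate away mid-walk
    return total

if __name__ == "__main__":
    if JOURNAL_DIR.exists():
        print(f"runtime journal uses {dir_size(JOURNAL_DIR) / (1024 * 1024):.1f} MiB")
    else:
        print(f"{JOURNAL_DIR} not present")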
Jan 30 13:01:43.705545 systemd[1]: Queued start job for default target multi-user.target. Jan 30 13:01:43.737461 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 30 13:01:43.737867 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 30 13:01:43.947890 systemd[1]: verity-setup.service: Deactivated successfully. Jan 30 13:01:43.947928 systemd[1]: Stopped verity-setup.service. Jan 30 13:01:43.959759 systemd[1]: Started systemd-journald.service - Journal Service. Jan 30 13:01:43.960404 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 30 13:01:43.961634 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 30 13:01:43.963453 systemd[1]: Mounted media.mount - External Media Directory. Jan 30 13:01:43.964584 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 30 13:01:43.965829 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 30 13:01:43.967963 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 30 13:01:43.969908 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 30 13:01:43.971725 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 30 13:01:43.975092 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 30 13:01:43.975319 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 30 13:01:43.976831 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 13:01:43.977044 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 13:01:43.978477 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 30 13:01:43.978713 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 30 13:01:43.980028 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 13:01:43.980234 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 30 13:01:43.981793 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 30 13:01:43.982029 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 30 13:01:43.983392 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 30 13:01:43.983595 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 30 13:01:43.985074 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 30 13:01:43.986551 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 30 13:01:43.988264 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 30 13:01:44.000129 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 30 13:01:44.019823 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 30 13:01:44.022147 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 30 13:01:44.023320 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 30 13:01:44.023375 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 30 13:01:44.025442 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 30 13:01:44.027985 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... 
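The modprobe@*.service instances above pull in configfs, dm_mod, drm, efi_pstore, fuse and loop, matching the kernel's earlier "loop: module loaded" and "fuse: init" lines. A sketch that checks which of those names appear in /proc/modules; anything built directly into the kernel will not be listed there, so a miss is not necessarily a failure:

#!/usr/bin/env python3
"""Check which of the modules requested by modprobe@*.service are loaded."""
from pathlib import Path

WANTED = ["configfs", "dm_mod", "drm", "efi_pstore", "fuse", "loop"]

def loaded_modules() -> set:
    # /proc/modules lists one loaded module per line, name first.
    return {line.split()[0] for line in Path("/proc/modules").read_text().splitlines() if line}

if __name__ == "__main__":
    present = loaded_modules()
    for mod in WANTED:
        print(f"{mod}: {'loaded' if mod in present else 'not listed (possibly built-in)'}")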
Jan 30 13:01:44.030233 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 30 13:01:44.031514 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 13:01:44.033958 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 30 13:01:44.036865 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 30 13:01:44.038255 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 30 13:01:44.041869 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 30 13:01:44.043261 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 30 13:01:44.049799 systemd-journald[1112]: Time spent on flushing to /var/log/journal/f9d45b18a2964dc7b6d17aeee0eb157f is 20.629ms for 854 entries. Jan 30 13:01:44.049799 systemd-journald[1112]: System Journal (/var/log/journal/f9d45b18a2964dc7b6d17aeee0eb157f) is 8.0M, max 195.6M, 187.6M free. Jan 30 13:01:44.080987 systemd-journald[1112]: Received client request to flush runtime journal. Jan 30 13:01:44.046890 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 30 13:01:44.053095 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 30 13:01:44.070884 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 30 13:01:44.077771 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 30 13:01:44.079337 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 30 13:01:44.081064 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 30 13:01:44.082821 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 30 13:01:44.084522 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 30 13:01:44.086250 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 30 13:01:44.088021 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 30 13:01:44.092114 kernel: loop0: detected capacity change from 0 to 189592 Jan 30 13:01:44.094461 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 30 13:01:44.103148 systemd-tmpfiles[1158]: ACLs are not supported, ignoring. Jan 30 13:01:44.103336 systemd-tmpfiles[1158]: ACLs are not supported, ignoring. Jan 30 13:01:44.105986 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 30 13:01:44.111698 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 30 13:01:44.111702 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 30 13:01:44.114740 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 30 13:01:44.124119 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 30 13:01:44.126732 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 30 13:01:44.128730 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. 
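systemd-machine-id-commit.service has just made the machine ID, initialised earlier from the VM UUID, persistent on disk. A trivial check of the committed value, assuming only the standard /etc/machine-id location:

#!/usr/bin/env python3
"""Print the committed machine ID (32 lowercase hex characters)."""
from pathlib import Path

machine_id = Path("/etc/machine-id").read_text().strip()
print("machine-id:", machine_id)
print("looks valid:", len(machine_id) == 32 and all(c in "0123456789abcdef" for c in machine_id))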
Jan 30 13:01:44.134689 kernel: loop1: detected capacity change from 0 to 114432 Jan 30 13:01:44.134882 udevadm[1172]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jan 30 13:01:44.153529 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 30 13:01:44.162942 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 30 13:01:44.171699 kernel: loop2: detected capacity change from 0 to 114328 Jan 30 13:01:44.179229 systemd-tmpfiles[1180]: ACLs are not supported, ignoring. Jan 30 13:01:44.179251 systemd-tmpfiles[1180]: ACLs are not supported, ignoring. Jan 30 13:01:44.183508 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 30 13:01:44.214744 kernel: loop3: detected capacity change from 0 to 189592 Jan 30 13:01:44.221718 kernel: loop4: detected capacity change from 0 to 114432 Jan 30 13:01:44.226752 kernel: loop5: detected capacity change from 0 to 114328 Jan 30 13:01:44.229356 (sd-merge)[1185]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jan 30 13:01:44.230172 (sd-merge)[1185]: Merged extensions into '/usr'. Jan 30 13:01:44.234783 systemd[1]: Reloading requested from client PID 1156 ('systemd-sysext') (unit systemd-sysext.service)... Jan 30 13:01:44.234903 systemd[1]: Reloading... Jan 30 13:01:44.286712 zram_generator::config[1209]: No configuration found. Jan 30 13:01:44.346875 ldconfig[1151]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 30 13:01:44.392871 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 13:01:44.429264 systemd[1]: Reloading finished in 193 ms. Jan 30 13:01:44.459060 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 30 13:01:44.460597 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 30 13:01:44.480932 systemd[1]: Starting ensure-sysext.service... Jan 30 13:01:44.483136 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 30 13:01:44.492746 systemd[1]: Reloading requested from client PID 1245 ('systemctl') (unit ensure-sysext.service)... Jan 30 13:01:44.492763 systemd[1]: Reloading... Jan 30 13:01:44.502188 systemd-tmpfiles[1246]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 30 13:01:44.502463 systemd-tmpfiles[1246]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 30 13:01:44.503138 systemd-tmpfiles[1246]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 30 13:01:44.503360 systemd-tmpfiles[1246]: ACLs are not supported, ignoring. Jan 30 13:01:44.503408 systemd-tmpfiles[1246]: ACLs are not supported, ignoring. Jan 30 13:01:44.510235 systemd-tmpfiles[1246]: Detected autofs mount point /boot during canonicalization of boot. Jan 30 13:01:44.510388 systemd-tmpfiles[1246]: Skipping /boot Jan 30 13:01:44.522718 systemd-tmpfiles[1246]: Detected autofs mount point /boot during canonicalization of boot. Jan 30 13:01:44.522842 systemd-tmpfiles[1246]: Skipping /boot Jan 30 13:01:44.540859 zram_generator::config[1273]: No configuration found. 
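The sd-merge entries above show systemd-sysext merging the containerd-flatcar, docker-flatcar and kubernetes images into /usr, followed by a daemon reload; the kubernetes image is the one Ignition placed at /opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw with a symlink under /etc/extensions during the files stage. A sketch that lists the raw sysext images in those two directories:

#!/usr/bin/env python3
"""List sysext images under the directories referenced earlier in this log."""
from pathlib import Path

SEARCH_DIRS = [Path("/etc/extensions"), Path("/opt/extensions")]

def find_raw_images() -> list:
    images = []
    for base in SEARCH_DIRS:
        if base.is_dir():
            images.extend(sorted(base.rglob("*.raw")))
    return images

if __name__ == "__main__":
    for image in find_raw_images():
        target = f" -> {image.resolve()}" if image.is_symlink() else ""
        print(f"{image}{target}")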
Jan 30 13:01:44.634572 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 13:01:44.671808 systemd[1]: Reloading finished in 178 ms. Jan 30 13:01:44.689741 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 30 13:01:44.703194 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 30 13:01:44.712991 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 30 13:01:44.716068 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 30 13:01:44.718895 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 30 13:01:44.727006 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 30 13:01:44.741732 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 13:01:44.746748 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 30 13:01:44.750139 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 30 13:01:44.761795 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 13:01:44.767968 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 13:01:44.772979 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 30 13:01:44.779084 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 30 13:01:44.780951 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 13:01:44.782407 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 30 13:01:44.785802 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 13:01:44.785996 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 13:01:44.788304 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 13:01:44.790411 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 30 13:01:44.791484 systemd-udevd[1315]: Using default interface naming scheme 'v255'. Jan 30 13:01:44.792704 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 30 13:01:44.792853 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 30 13:01:44.800074 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 30 13:01:44.803135 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 30 13:01:44.812362 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 30 13:01:44.814896 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 30 13:01:44.823292 systemd[1]: Finished ensure-sysext.service. Jan 30 13:01:44.827628 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 13:01:44.836000 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 13:01:44.841856 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... 
Jan 30 13:01:44.843114 augenrules[1356]: No rules Jan 30 13:01:44.844191 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 30 13:01:44.846915 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 30 13:01:44.849002 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 13:01:44.853881 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 30 13:01:44.863695 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1359) Jan 30 13:01:44.863893 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 30 13:01:44.869154 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 30 13:01:44.871762 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 30 13:01:44.872394 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 30 13:01:44.876252 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 13:01:44.876412 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 13:01:44.882585 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 30 13:01:44.882804 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 30 13:01:44.884454 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 13:01:44.884632 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 30 13:01:44.887405 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 30 13:01:44.888756 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 30 13:01:44.898233 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Jan 30 13:01:44.908227 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 30 13:01:44.914944 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 30 13:01:44.916083 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 30 13:01:44.916155 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 30 13:01:44.930465 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 30 13:01:44.933716 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 30 13:01:45.007556 systemd-resolved[1314]: Positive Trust Anchors: Jan 30 13:01:45.007564 systemd-networkd[1371]: lo: Link UP Jan 30 13:01:45.007570 systemd-networkd[1371]: lo: Gained carrier Jan 30 13:01:45.007577 systemd-resolved[1314]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 30 13:01:45.007610 systemd-resolved[1314]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 30 13:01:45.008408 systemd-networkd[1371]: Enumeration completed Jan 30 13:01:45.012012 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 13:01:45.013754 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 30 13:01:45.014805 systemd-networkd[1371]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 13:01:45.014811 systemd-networkd[1371]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 30 13:01:45.015123 systemd-resolved[1314]: Defaulting to hostname 'linux'. Jan 30 13:01:45.015350 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 30 13:01:45.016230 systemd-networkd[1371]: eth0: Link UP Jan 30 13:01:45.016288 systemd-networkd[1371]: eth0: Gained carrier Jan 30 13:01:45.016343 systemd-networkd[1371]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 13:01:45.017111 systemd[1]: Reached target time-set.target - System Time Set. Jan 30 13:01:45.019552 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 30 13:01:45.020968 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 30 13:01:45.022466 systemd[1]: Reached target network.target - Network. Jan 30 13:01:45.023505 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 30 13:01:45.029736 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 30 13:01:45.034769 systemd-networkd[1371]: eth0: DHCPv4 address 10.0.0.89/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 30 13:01:45.036081 systemd-timesyncd[1373]: Network configuration changed, trying to establish connection. Jan 30 13:01:44.595964 systemd-timesyncd[1373]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jan 30 13:01:44.613306 systemd-journald[1112]: Time jumped backwards, rotating. Jan 30 13:01:44.596010 systemd-timesyncd[1373]: Initial clock synchronization to Thu 2025-01-30 13:01:44.595859 UTC. Jan 30 13:01:44.596070 systemd-resolved[1314]: Clock change detected. Flushing caches. Jan 30 13:01:44.601852 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 30 13:01:44.623502 lvm[1403]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 30 13:01:44.642664 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:01:44.663198 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 30 13:01:44.666831 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 30 13:01:44.667947 systemd[1]: Reached target sysinit.target - System Initialization. 
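systemd-networkd reports a DHCPv4 lease of 10.0.0.89/16 on eth0 with gateway 10.0.0.1, the same host systemd-timesyncd then uses as its NTP server on port 123. A quick consistency check of those values with the standard-library ipaddress module:

#!/usr/bin/env python3
"""Sanity-check the DHCPv4 lease reported by systemd-networkd above."""
import ipaddress

iface = ipaddress.ip_interface("10.0.0.89/16")   # address/prefix from the lease
gateway = ipaddress.ip_address("10.0.0.1")       # also the NTP server contacted later

print("network:", iface.network)                      # 10.0.0.0/16
print("usable hosts:", iface.network.num_addresses - 2)
print("gateway on-link:", gateway in iface.network)   # True for this lease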
Jan 30 13:01:44.669139 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 30 13:01:44.670422 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 30 13:01:44.672048 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 30 13:01:44.673240 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 30 13:01:44.674533 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 30 13:01:44.675923 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 30 13:01:44.675976 systemd[1]: Reached target paths.target - Path Units. Jan 30 13:01:44.676870 systemd[1]: Reached target timers.target - Timer Units. Jan 30 13:01:44.679202 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 30 13:01:44.681829 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 30 13:01:44.689660 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 30 13:01:44.692165 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 30 13:01:44.694247 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 30 13:01:44.695513 systemd[1]: Reached target sockets.target - Socket Units. Jan 30 13:01:44.696534 systemd[1]: Reached target basic.target - Basic System. Jan 30 13:01:44.697568 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 30 13:01:44.697608 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 30 13:01:44.698706 systemd[1]: Starting containerd.service - containerd container runtime... Jan 30 13:01:44.700969 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 30 13:01:44.703753 lvm[1412]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 30 13:01:44.703754 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 30 13:01:44.708868 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 30 13:01:44.713637 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 30 13:01:44.717856 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 30 13:01:44.733292 jq[1415]: false Jan 30 13:01:44.726585 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 30 13:01:44.731067 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 30 13:01:44.733794 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 30 13:01:44.743590 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 30 13:01:44.762550 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 30 13:01:44.763373 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 30 13:01:44.765253 systemd[1]: Starting update-engine.service - Update Engine... 
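docker.socket is now listening for the API, with its legacy /var/run/docker.sock path rewritten to /run/docker.sock as the earlier unit warnings noted. A sketch that checks the rewritten path exists and really is a unix socket; whether anything is installed behind it is not something this log settles:

#!/usr/bin/env python3
"""Confirm the socket-activated Docker API path is present and is a socket."""
import os
import stat

SOCK = "/run/docker.sock"  # path the docker.socket warning says is now in use

try:
    st = os.stat(SOCK)
except FileNotFoundError:
    print(f"{SOCK} does not exist (docker.socket may not be active)")
else:
    kind = "unix socket" if stat.S_ISSOCK(st.st_mode) else "not a socket"
    print(f"{SOCK}: {kind}, mode {stat.filemode(st.st_mode)}")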
Jan 30 13:01:44.767965 extend-filesystems[1416]: Found loop3 Jan 30 13:01:44.767965 extend-filesystems[1416]: Found loop4 Jan 30 13:01:44.767965 extend-filesystems[1416]: Found loop5 Jan 30 13:01:44.767965 extend-filesystems[1416]: Found vda Jan 30 13:01:44.767965 extend-filesystems[1416]: Found vda1 Jan 30 13:01:44.767965 extend-filesystems[1416]: Found vda2 Jan 30 13:01:44.767965 extend-filesystems[1416]: Found vda3 Jan 30 13:01:44.767965 extend-filesystems[1416]: Found usr Jan 30 13:01:44.767965 extend-filesystems[1416]: Found vda4 Jan 30 13:01:44.767965 extend-filesystems[1416]: Found vda6 Jan 30 13:01:44.767965 extend-filesystems[1416]: Found vda7 Jan 30 13:01:44.767965 extend-filesystems[1416]: Found vda9 Jan 30 13:01:44.767965 extend-filesystems[1416]: Checking size of /dev/vda9 Jan 30 13:01:44.769151 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 30 13:01:44.778797 dbus-daemon[1414]: [system] SELinux support is enabled Jan 30 13:01:44.771513 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 30 13:01:44.778145 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 30 13:01:44.801196 jq[1430]: true Jan 30 13:01:44.778317 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 30 13:01:44.779320 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 30 13:01:44.787007 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 30 13:01:44.787894 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 30 13:01:44.796720 systemd[1]: motdgen.service: Deactivated successfully. Jan 30 13:01:44.796906 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 30 13:01:44.801044 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 30 13:01:44.801083 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 30 13:01:44.803840 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 30 13:01:44.803873 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 30 13:01:44.804521 jq[1437]: true Jan 30 13:01:44.828742 (ntainerd)[1447]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 30 13:01:44.830927 extend-filesystems[1416]: Resized partition /dev/vda9 Jan 30 13:01:44.835262 systemd-logind[1423]: Watching system buttons on /dev/input/event0 (Power Button) Jan 30 13:01:44.835714 extend-filesystems[1459]: resize2fs 1.47.1 (20-May-2024) Jan 30 13:01:44.858944 tar[1435]: linux-arm64/helm Jan 30 13:01:44.859160 update_engine[1428]: I20250130 13:01:44.846689 1428 main.cc:92] Flatcar Update Engine starting Jan 30 13:01:44.859160 update_engine[1428]: I20250130 13:01:44.851891 1428 update_check_scheduler.cc:74] Next update check in 7m42s Jan 30 13:01:44.836261 systemd-logind[1423]: New seat seat0. Jan 30 13:01:44.851868 systemd[1]: Started update-engine.service - Update Engine. Jan 30 13:01:44.863517 systemd[1]: Started systemd-logind.service - User Login Management. 
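extend-filesystems has started resize2fs 1.47.1 against /dev/vda9, and the entries that follow show the filesystem growing online from 553472 to 1864699 blocks of 4k. A quick arithmetic check of what that growth amounts to, using those figures:

#!/usr/bin/env python3
"""Convert the block counts reported around the /dev/vda9 resize into sizes."""
BLOCK_SIZE = 4096          # "(4k) blocks" per the extend-filesystems output
OLD_BLOCKS = 553_472       # filesystem size before the online resize
NEW_BLOCKS = 1_864_699     # filesystem size afterwards

def gib(blocks: int) -> float:
    return blocks * BLOCK_SIZE / 1024 ** 3

print(f"before: {gib(OLD_BLOCKS):.2f} GiB")   # about 2.11 GiB
print(f"after:  {gib(NEW_BLOCKS):.2f} GiB")   # about 7.11 GiB
print(f"gained: {gib(NEW_BLOCKS - OLD_BLOCKS):.2f} GiB")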
Jan 30 13:01:44.866678 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1348) Jan 30 13:01:44.868649 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jan 30 13:01:44.872922 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 30 13:01:44.945455 locksmithd[1468]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 30 13:01:44.958646 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jan 30 13:01:45.002952 extend-filesystems[1459]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 30 13:01:45.002952 extend-filesystems[1459]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 30 13:01:45.002952 extend-filesystems[1459]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jan 30 13:01:45.014404 extend-filesystems[1416]: Resized filesystem in /dev/vda9 Jan 30 13:01:45.003988 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 30 13:01:45.015566 bash[1466]: Updated "/home/core/.ssh/authorized_keys" Jan 30 13:01:45.005679 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 30 13:01:45.009092 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 30 13:01:45.013690 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 30 13:01:45.202395 containerd[1447]: time="2025-01-30T13:01:45.200173909Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 30 13:01:45.233140 containerd[1447]: time="2025-01-30T13:01:45.233086069Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 30 13:01:45.235852 containerd[1447]: time="2025-01-30T13:01:45.234700269Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:01:45.235852 containerd[1447]: time="2025-01-30T13:01:45.234743989Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 30 13:01:45.235852 containerd[1447]: time="2025-01-30T13:01:45.234761949Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 30 13:01:45.235852 containerd[1447]: time="2025-01-30T13:01:45.234937749Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 30 13:01:45.235852 containerd[1447]: time="2025-01-30T13:01:45.234960509Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 30 13:01:45.235852 containerd[1447]: time="2025-01-30T13:01:45.235018629Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:01:45.235852 containerd[1447]: time="2025-01-30T13:01:45.235031629Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 30 13:01:45.235852 containerd[1447]: time="2025-01-30T13:01:45.235192669Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:01:45.235852 containerd[1447]: time="2025-01-30T13:01:45.235207269Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 30 13:01:45.235852 containerd[1447]: time="2025-01-30T13:01:45.235220269Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:01:45.235852 containerd[1447]: time="2025-01-30T13:01:45.235230149Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 30 13:01:45.236069 containerd[1447]: time="2025-01-30T13:01:45.235316829Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 30 13:01:45.236069 containerd[1447]: time="2025-01-30T13:01:45.235501149Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 30 13:01:45.236069 containerd[1447]: time="2025-01-30T13:01:45.235647149Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:01:45.236069 containerd[1447]: time="2025-01-30T13:01:45.235678389Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 30 13:01:45.236069 containerd[1447]: time="2025-01-30T13:01:45.235776669Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 30 13:01:45.236069 containerd[1447]: time="2025-01-30T13:01:45.235815229Z" level=info msg="metadata content store policy set" policy=shared Jan 30 13:01:45.279466 containerd[1447]: time="2025-01-30T13:01:45.279414389Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 30 13:01:45.279723 containerd[1447]: time="2025-01-30T13:01:45.279698349Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 30 13:01:45.279930 containerd[1447]: time="2025-01-30T13:01:45.279909469Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 30 13:01:45.280072 containerd[1447]: time="2025-01-30T13:01:45.280054309Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 30 13:01:45.280250 containerd[1447]: time="2025-01-30T13:01:45.280232789Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 30 13:01:45.280584 containerd[1447]: time="2025-01-30T13:01:45.280543229Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 30 13:01:45.282965 containerd[1447]: time="2025-01-30T13:01:45.282869709Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 30 13:01:45.283404 containerd[1447]: time="2025-01-30T13:01:45.283372589Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." 
type=io.containerd.runtime.v2 Jan 30 13:01:45.283744 containerd[1447]: time="2025-01-30T13:01:45.283608829Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 30 13:01:45.283744 containerd[1447]: time="2025-01-30T13:01:45.283706389Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 30 13:01:45.284471 containerd[1447]: time="2025-01-30T13:01:45.283730469Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 30 13:01:45.284471 containerd[1447]: time="2025-01-30T13:01:45.284394909Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 30 13:01:45.284790 containerd[1447]: time="2025-01-30T13:01:45.284634869Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 30 13:01:45.284790 containerd[1447]: time="2025-01-30T13:01:45.284734229Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 30 13:01:45.284790 containerd[1447]: time="2025-01-30T13:01:45.284753309Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 30 13:01:45.284790 containerd[1447]: time="2025-01-30T13:01:45.284770829Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 30 13:01:45.284994 containerd[1447]: time="2025-01-30T13:01:45.284960549Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 30 13:01:45.285119 containerd[1447]: time="2025-01-30T13:01:45.285049629Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 30 13:01:45.285119 containerd[1447]: time="2025-01-30T13:01:45.285088269Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 30 13:01:45.285214 containerd[1447]: time="2025-01-30T13:01:45.285105749Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 30 13:01:45.285401 containerd[1447]: time="2025-01-30T13:01:45.285327229Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 30 13:01:45.285401 containerd[1447]: time="2025-01-30T13:01:45.285352949Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 30 13:01:45.285401 containerd[1447]: time="2025-01-30T13:01:45.285368949Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 30 13:01:45.285603 containerd[1447]: time="2025-01-30T13:01:45.285547389Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 30 13:01:45.285603 containerd[1447]: time="2025-01-30T13:01:45.285578429Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 30 13:01:45.285829 containerd[1447]: time="2025-01-30T13:01:45.285750429Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 30 13:01:45.285829 containerd[1447]: time="2025-01-30T13:01:45.285779749Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." 
type=io.containerd.grpc.v1 Jan 30 13:01:45.285829 containerd[1447]: time="2025-01-30T13:01:45.285799469Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 30 13:01:45.285991 containerd[1447]: time="2025-01-30T13:01:45.285812989Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 30 13:01:45.285991 containerd[1447]: time="2025-01-30T13:01:45.285934229Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 30 13:01:45.285991 containerd[1447]: time="2025-01-30T13:01:45.285948749Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 30 13:01:45.285991 containerd[1447]: time="2025-01-30T13:01:45.285967669Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 30 13:01:45.286645 containerd[1447]: time="2025-01-30T13:01:45.286216589Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 30 13:01:45.286645 containerd[1447]: time="2025-01-30T13:01:45.286309909Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 30 13:01:45.286645 containerd[1447]: time="2025-01-30T13:01:45.286328789Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 30 13:01:45.286900 containerd[1447]: time="2025-01-30T13:01:45.286880909Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 30 13:01:45.288404 tar[1435]: linux-arm64/LICENSE Jan 30 13:01:45.288404 tar[1435]: linux-arm64/README.md Jan 30 13:01:45.288692 containerd[1447]: time="2025-01-30T13:01:45.288587469Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 30 13:01:45.288692 containerd[1447]: time="2025-01-30T13:01:45.288650269Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 30 13:01:45.288692 containerd[1447]: time="2025-01-30T13:01:45.288667989Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 30 13:01:45.288784 containerd[1447]: time="2025-01-30T13:01:45.288678589Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 30 13:01:45.288855 containerd[1447]: time="2025-01-30T13:01:45.288825509Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 30 13:01:45.288913 containerd[1447]: time="2025-01-30T13:01:45.288844349Z" level=info msg="NRI interface is disabled by configuration." Jan 30 13:01:45.288995 containerd[1447]: time="2025-01-30T13:01:45.288968629Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 30 13:01:45.291470 containerd[1447]: time="2025-01-30T13:01:45.291288509Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 30 13:01:45.291470 containerd[1447]: time="2025-01-30T13:01:45.291423629Z" level=info msg="Connect containerd service" Jan 30 13:01:45.291792 containerd[1447]: time="2025-01-30T13:01:45.291663029Z" level=info msg="using legacy CRI server" Jan 30 13:01:45.291792 containerd[1447]: time="2025-01-30T13:01:45.291684509Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 30 13:01:45.292033 containerd[1447]: time="2025-01-30T13:01:45.291975789Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 30 13:01:45.293245 containerd[1447]: time="2025-01-30T13:01:45.293118349Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 30 13:01:45.293788 
containerd[1447]: time="2025-01-30T13:01:45.293517749Z" level=info msg="Start subscribing containerd event" Jan 30 13:01:45.293788 containerd[1447]: time="2025-01-30T13:01:45.293609709Z" level=info msg="Start recovering state" Jan 30 13:01:45.293788 containerd[1447]: time="2025-01-30T13:01:45.293707749Z" level=info msg="Start event monitor" Jan 30 13:01:45.293788 containerd[1447]: time="2025-01-30T13:01:45.293720949Z" level=info msg="Start snapshots syncer" Jan 30 13:01:45.293788 containerd[1447]: time="2025-01-30T13:01:45.293730909Z" level=info msg="Start cni network conf syncer for default" Jan 30 13:01:45.293788 containerd[1447]: time="2025-01-30T13:01:45.293738029Z" level=info msg="Start streaming server" Jan 30 13:01:45.294730 containerd[1447]: time="2025-01-30T13:01:45.294710469Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 30 13:01:45.294848 containerd[1447]: time="2025-01-30T13:01:45.294823709Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 30 13:01:45.294994 containerd[1447]: time="2025-01-30T13:01:45.294978869Z" level=info msg="containerd successfully booted in 0.097704s" Jan 30 13:01:45.296406 systemd[1]: Started containerd.service - containerd container runtime. Jan 30 13:01:45.301653 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 30 13:01:45.784217 systemd-networkd[1371]: eth0: Gained IPv6LL Jan 30 13:01:45.787128 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 30 13:01:45.791249 systemd[1]: Reached target network-online.target - Network is Online. Jan 30 13:01:45.799927 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 30 13:01:45.802820 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:01:45.805077 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 30 13:01:45.825209 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 30 13:01:45.826722 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 30 13:01:45.828718 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 30 13:01:45.831659 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 30 13:01:45.833178 sshd_keygen[1427]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 30 13:01:45.853555 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 30 13:01:45.865140 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 30 13:01:45.870807 systemd[1]: issuegen.service: Deactivated successfully. Jan 30 13:01:45.871064 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 30 13:01:45.881999 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 30 13:01:45.897343 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 30 13:01:45.900490 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 30 13:01:45.902932 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jan 30 13:01:45.904530 systemd[1]: Reached target getty.target - Login Prompts. Jan 30 13:01:46.386951 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:01:46.388567 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 30 13:01:46.390305 systemd[1]: Startup finished in 659ms (kernel) + 4.504s (initrd) + 3.601s (userspace) = 8.765s. 
Jan 30 13:01:46.391039 (kubelet)[1526]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 13:01:46.925534 kubelet[1526]: E0130 13:01:46.925427 1526 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 13:01:46.927926 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 13:01:46.928080 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 13:01:51.454768 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 30 13:01:51.455918 systemd[1]: Started sshd@0-10.0.0.89:22-10.0.0.1:57262.service - OpenSSH per-connection server daemon (10.0.0.1:57262). Jan 30 13:01:51.516240 sshd[1539]: Accepted publickey for core from 10.0.0.1 port 57262 ssh2: RSA SHA256:L/o4MUo/PVZc4DGxUVgHzL5cb5Nt9LAG1MaqWWrUsp0 Jan 30 13:01:51.519106 sshd[1539]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:01:51.532347 systemd-logind[1423]: New session 1 of user core. Jan 30 13:01:51.533454 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 30 13:01:51.545001 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 30 13:01:51.558102 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 30 13:01:51.571069 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 30 13:01:51.573964 (systemd)[1543]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 30 13:01:51.654271 systemd[1543]: Queued start job for default target default.target. Jan 30 13:01:51.669720 systemd[1543]: Created slice app.slice - User Application Slice. Jan 30 13:01:51.669771 systemd[1543]: Reached target paths.target - Paths. Jan 30 13:01:51.669784 systemd[1543]: Reached target timers.target - Timers. Jan 30 13:01:51.671162 systemd[1543]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 30 13:01:51.683711 systemd[1543]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 30 13:01:51.683839 systemd[1543]: Reached target sockets.target - Sockets. Jan 30 13:01:51.683856 systemd[1543]: Reached target basic.target - Basic System. Jan 30 13:01:51.683897 systemd[1543]: Reached target default.target - Main User Target. Jan 30 13:01:51.683928 systemd[1543]: Startup finished in 102ms. Jan 30 13:01:51.684258 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 30 13:01:51.686145 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 30 13:01:51.754904 systemd[1]: Started sshd@1-10.0.0.89:22-10.0.0.1:57270.service - OpenSSH per-connection server daemon (10.0.0.1:57270). Jan 30 13:01:51.804541 sshd[1554]: Accepted publickey for core from 10.0.0.1 port 57270 ssh2: RSA SHA256:L/o4MUo/PVZc4DGxUVgHzL5cb5Nt9LAG1MaqWWrUsp0 Jan 30 13:01:51.806192 sshd[1554]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:01:51.810665 systemd-logind[1423]: New session 2 of user core. Jan 30 13:01:51.818858 systemd[1]: Started session-2.scope - Session 2 of User core. 
Jan 30 13:01:51.872895 sshd[1554]: pam_unix(sshd:session): session closed for user core Jan 30 13:01:51.892051 systemd[1]: sshd@1-10.0.0.89:22-10.0.0.1:57270.service: Deactivated successfully. Jan 30 13:01:51.894451 systemd[1]: session-2.scope: Deactivated successfully. Jan 30 13:01:51.896409 systemd-logind[1423]: Session 2 logged out. Waiting for processes to exit. Jan 30 13:01:51.922064 systemd[1]: Started sshd@2-10.0.0.89:22-10.0.0.1:57276.service - OpenSSH per-connection server daemon (10.0.0.1:57276). Jan 30 13:01:51.923007 systemd-logind[1423]: Removed session 2. Jan 30 13:01:51.962692 sshd[1561]: Accepted publickey for core from 10.0.0.1 port 57276 ssh2: RSA SHA256:L/o4MUo/PVZc4DGxUVgHzL5cb5Nt9LAG1MaqWWrUsp0 Jan 30 13:01:51.963424 sshd[1561]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:01:51.968608 systemd-logind[1423]: New session 3 of user core. Jan 30 13:01:51.987872 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 30 13:01:52.055610 sshd[1561]: pam_unix(sshd:session): session closed for user core Jan 30 13:01:52.072822 systemd[1]: sshd@2-10.0.0.89:22-10.0.0.1:57276.service: Deactivated successfully. Jan 30 13:01:52.075264 systemd[1]: session-3.scope: Deactivated successfully. Jan 30 13:01:52.077313 systemd-logind[1423]: Session 3 logged out. Waiting for processes to exit. Jan 30 13:01:52.085143 systemd[1]: Started sshd@3-10.0.0.89:22-10.0.0.1:57282.service - OpenSSH per-connection server daemon (10.0.0.1:57282). Jan 30 13:01:52.086181 systemd-logind[1423]: Removed session 3. Jan 30 13:01:52.129866 sshd[1568]: Accepted publickey for core from 10.0.0.1 port 57282 ssh2: RSA SHA256:L/o4MUo/PVZc4DGxUVgHzL5cb5Nt9LAG1MaqWWrUsp0 Jan 30 13:01:52.131917 sshd[1568]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:01:52.137701 systemd-logind[1423]: New session 4 of user core. Jan 30 13:01:52.148864 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 30 13:01:52.211382 sshd[1568]: pam_unix(sshd:session): session closed for user core Jan 30 13:01:52.224413 systemd[1]: sshd@3-10.0.0.89:22-10.0.0.1:57282.service: Deactivated successfully. Jan 30 13:01:52.229877 systemd[1]: session-4.scope: Deactivated successfully. Jan 30 13:01:52.233685 systemd-logind[1423]: Session 4 logged out. Waiting for processes to exit. Jan 30 13:01:52.246095 systemd[1]: Started sshd@4-10.0.0.89:22-10.0.0.1:57284.service - OpenSSH per-connection server daemon (10.0.0.1:57284). Jan 30 13:01:52.247230 systemd-logind[1423]: Removed session 4. Jan 30 13:01:52.283236 sshd[1575]: Accepted publickey for core from 10.0.0.1 port 57284 ssh2: RSA SHA256:L/o4MUo/PVZc4DGxUVgHzL5cb5Nt9LAG1MaqWWrUsp0 Jan 30 13:01:52.285368 sshd[1575]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:01:52.291025 systemd-logind[1423]: New session 5 of user core. Jan 30 13:01:52.309864 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 30 13:01:52.439532 sudo[1578]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 30 13:01:52.440021 sudo[1578]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 13:01:52.461866 sudo[1578]: pam_unix(sudo:session): session closed for user root Jan 30 13:01:52.471323 sshd[1575]: pam_unix(sshd:session): session closed for user core Jan 30 13:01:52.493927 systemd[1]: Started sshd@5-10.0.0.89:22-10.0.0.1:38642.service - OpenSSH per-connection server daemon (10.0.0.1:38642). 
Jan 30 13:01:52.494512 systemd[1]: sshd@4-10.0.0.89:22-10.0.0.1:57284.service: Deactivated successfully. Jan 30 13:01:52.497734 systemd[1]: session-5.scope: Deactivated successfully. Jan 30 13:01:52.500677 systemd-logind[1423]: Session 5 logged out. Waiting for processes to exit. Jan 30 13:01:52.504145 systemd-logind[1423]: Removed session 5. Jan 30 13:01:52.531279 sshd[1581]: Accepted publickey for core from 10.0.0.1 port 38642 ssh2: RSA SHA256:L/o4MUo/PVZc4DGxUVgHzL5cb5Nt9LAG1MaqWWrUsp0 Jan 30 13:01:52.533339 sshd[1581]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:01:52.544319 systemd-logind[1423]: New session 6 of user core. Jan 30 13:01:52.554067 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 30 13:01:52.606558 sudo[1587]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 30 13:01:52.607338 sudo[1587]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 13:01:52.612093 sudo[1587]: pam_unix(sudo:session): session closed for user root Jan 30 13:01:52.620471 sudo[1586]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 30 13:01:52.621037 sudo[1586]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 13:01:52.646932 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 30 13:01:52.649261 auditctl[1590]: No rules Jan 30 13:01:52.650261 systemd[1]: audit-rules.service: Deactivated successfully. Jan 30 13:01:52.650479 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 30 13:01:52.652290 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 30 13:01:52.683977 augenrules[1608]: No rules Jan 30 13:01:52.686684 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 30 13:01:52.689290 sudo[1586]: pam_unix(sudo:session): session closed for user root Jan 30 13:01:52.692658 sshd[1581]: pam_unix(sshd:session): session closed for user core Jan 30 13:01:52.702205 systemd[1]: sshd@5-10.0.0.89:22-10.0.0.1:38642.service: Deactivated successfully. Jan 30 13:01:52.703842 systemd[1]: session-6.scope: Deactivated successfully. Jan 30 13:01:52.705073 systemd-logind[1423]: Session 6 logged out. Waiting for processes to exit. Jan 30 13:01:52.706368 systemd[1]: Started sshd@6-10.0.0.89:22-10.0.0.1:38646.service - OpenSSH per-connection server daemon (10.0.0.1:38646). Jan 30 13:01:52.707250 systemd-logind[1423]: Removed session 6. Jan 30 13:01:52.743160 sshd[1616]: Accepted publickey for core from 10.0.0.1 port 38646 ssh2: RSA SHA256:L/o4MUo/PVZc4DGxUVgHzL5cb5Nt9LAG1MaqWWrUsp0 Jan 30 13:01:52.745403 sshd[1616]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:01:52.751277 systemd-logind[1423]: New session 7 of user core. Jan 30 13:01:52.765819 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 30 13:01:52.821899 sudo[1619]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 30 13:01:52.822198 sudo[1619]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 13:01:53.204413 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Jan 30 13:01:53.206549 (dockerd)[1639]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 30 13:01:53.539147 dockerd[1639]: time="2025-01-30T13:01:53.539013309Z" level=info msg="Starting up" Jan 30 13:01:53.708928 dockerd[1639]: time="2025-01-30T13:01:53.708887189Z" level=info msg="Loading containers: start." Jan 30 13:01:53.807339 kernel: Initializing XFRM netlink socket Jan 30 13:01:53.895428 systemd-networkd[1371]: docker0: Link UP Jan 30 13:01:53.914162 dockerd[1639]: time="2025-01-30T13:01:53.914084509Z" level=info msg="Loading containers: done." Jan 30 13:01:53.927460 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1732996411-merged.mount: Deactivated successfully. Jan 30 13:01:53.929654 dockerd[1639]: time="2025-01-30T13:01:53.929590709Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 30 13:01:53.929766 dockerd[1639]: time="2025-01-30T13:01:53.929728029Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 30 13:01:53.929877 dockerd[1639]: time="2025-01-30T13:01:53.929848509Z" level=info msg="Daemon has completed initialization" Jan 30 13:01:53.977902 dockerd[1639]: time="2025-01-30T13:01:53.977756429Z" level=info msg="API listen on /run/docker.sock" Jan 30 13:01:53.978068 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 30 13:01:54.593639 containerd[1447]: time="2025-01-30T13:01:54.593554229Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.5\"" Jan 30 13:01:55.254477 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1512564012.mount: Deactivated successfully. 
Jan 30 13:01:56.099790 containerd[1447]: time="2025-01-30T13:01:56.099664269Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:01:56.100702 containerd[1447]: time="2025-01-30T13:01:56.100661389Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.5: active requests=0, bytes read=25618072" Jan 30 13:01:56.106215 containerd[1447]: time="2025-01-30T13:01:56.106144429Z" level=info msg="ImageCreate event name:\"sha256:c33b6b5a9aa5348a4f3ab96e0977e49acb8ca86c4ec3973023e12c0083423692\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:01:56.109361 containerd[1447]: time="2025-01-30T13:01:56.109303629Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:fc4b366c0036b90d147f3b58244cf7d5f1f42b0db539f0fe83a8fc6e25a434ab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:01:56.110502 containerd[1447]: time="2025-01-30T13:01:56.110454429Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.5\" with image id \"sha256:c33b6b5a9aa5348a4f3ab96e0977e49acb8ca86c4ec3973023e12c0083423692\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:fc4b366c0036b90d147f3b58244cf7d5f1f42b0db539f0fe83a8fc6e25a434ab\", size \"25614870\" in 1.51684408s" Jan 30 13:01:56.110502 containerd[1447]: time="2025-01-30T13:01:56.110500909Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.5\" returns image reference \"sha256:c33b6b5a9aa5348a4f3ab96e0977e49acb8ca86c4ec3973023e12c0083423692\"" Jan 30 13:01:56.111415 containerd[1447]: time="2025-01-30T13:01:56.111372109Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.5\"" Jan 30 13:01:57.168942 containerd[1447]: time="2025-01-30T13:01:57.168858229Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:01:57.169828 containerd[1447]: time="2025-01-30T13:01:57.169789949Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.5: active requests=0, bytes read=22469469" Jan 30 13:01:57.170804 containerd[1447]: time="2025-01-30T13:01:57.170767869Z" level=info msg="ImageCreate event name:\"sha256:678a3aee724f5d7904c30cda32c06f842784d67e7bd0cece4225fa7c1dcd0c73\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:01:57.176077 containerd[1447]: time="2025-01-30T13:01:57.176004309Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:848cf42bf6c3c5ccac232b76c901c309edb3ebeac4d856885af0fc718798207e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:01:57.177253 containerd[1447]: time="2025-01-30T13:01:57.177217189Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.5\" with image id \"sha256:678a3aee724f5d7904c30cda32c06f842784d67e7bd0cece4225fa7c1dcd0c73\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:848cf42bf6c3c5ccac232b76c901c309edb3ebeac4d856885af0fc718798207e\", size \"23873257\" in 1.06580696s" Jan 30 13:01:57.177321 containerd[1447]: time="2025-01-30T13:01:57.177260469Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.5\" returns image reference \"sha256:678a3aee724f5d7904c30cda32c06f842784d67e7bd0cece4225fa7c1dcd0c73\"" Jan 30 13:01:57.178431 
containerd[1447]: time="2025-01-30T13:01:57.178221149Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.5\"" Jan 30 13:01:57.178344 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 30 13:01:57.190876 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:01:57.296426 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:01:57.302335 (kubelet)[1854]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 13:01:57.351749 kubelet[1854]: E0130 13:01:57.351700 1854 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 13:01:57.355453 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 13:01:57.355691 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 13:01:58.458220 containerd[1447]: time="2025-01-30T13:01:58.458011109Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:01:58.459040 containerd[1447]: time="2025-01-30T13:01:58.458992189Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.5: active requests=0, bytes read=17024219" Jan 30 13:01:58.459726 containerd[1447]: time="2025-01-30T13:01:58.459673589Z" level=info msg="ImageCreate event name:\"sha256:066a1dc527aec5b7c19bcf4b81f92b15816afc78e9713266d355333b7eb81050\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:01:58.462998 containerd[1447]: time="2025-01-30T13:01:58.462949949Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:0e01fd956ba32a7fa08f6b6da24e8c49015905c8e2cf752978d495e44cd4a8a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:01:58.464205 containerd[1447]: time="2025-01-30T13:01:58.464081109Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.5\" with image id \"sha256:066a1dc527aec5b7c19bcf4b81f92b15816afc78e9713266d355333b7eb81050\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:0e01fd956ba32a7fa08f6b6da24e8c49015905c8e2cf752978d495e44cd4a8a9\", size \"18428025\" in 1.28582544s" Jan 30 13:01:58.464205 containerd[1447]: time="2025-01-30T13:01:58.464119869Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.5\" returns image reference \"sha256:066a1dc527aec5b7c19bcf4b81f92b15816afc78e9713266d355333b7eb81050\"" Jan 30 13:01:58.464585 containerd[1447]: time="2025-01-30T13:01:58.464524029Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.5\"" Jan 30 13:01:59.483965 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2134894391.mount: Deactivated successfully. 
Jan 30 13:01:59.693498 containerd[1447]: time="2025-01-30T13:01:59.693442309Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:01:59.694067 containerd[1447]: time="2025-01-30T13:01:59.694029349Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.5: active requests=0, bytes read=26772119" Jan 30 13:01:59.695137 containerd[1447]: time="2025-01-30T13:01:59.695084149Z" level=info msg="ImageCreate event name:\"sha256:571bb7ded0ff97311ed313f069becb58480cd66da04175981cfee2f3affe3e95\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:01:59.698539 containerd[1447]: time="2025-01-30T13:01:59.698491869Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:c00685cc45c1fb539c5bbd8d24d2577f96e9399efac1670f688f654b30f8c64c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:01:59.699565 containerd[1447]: time="2025-01-30T13:01:59.699127869Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.5\" with image id \"sha256:571bb7ded0ff97311ed313f069becb58480cd66da04175981cfee2f3affe3e95\", repo tag \"registry.k8s.io/kube-proxy:v1.31.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:c00685cc45c1fb539c5bbd8d24d2577f96e9399efac1670f688f654b30f8c64c\", size \"26771136\" in 1.23439216s" Jan 30 13:01:59.699565 containerd[1447]: time="2025-01-30T13:01:59.699159749Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.5\" returns image reference \"sha256:571bb7ded0ff97311ed313f069becb58480cd66da04175981cfee2f3affe3e95\"" Jan 30 13:01:59.699565 containerd[1447]: time="2025-01-30T13:01:59.699560229Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 30 13:02:00.261988 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1902113552.mount: Deactivated successfully. 
Jan 30 13:02:00.791790 containerd[1447]: time="2025-01-30T13:02:00.791732709Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:02:00.792303 containerd[1447]: time="2025-01-30T13:02:00.792263629Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485383" Jan 30 13:02:00.795084 containerd[1447]: time="2025-01-30T13:02:00.795046029Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:02:00.798756 containerd[1447]: time="2025-01-30T13:02:00.798714229Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:02:00.800000 containerd[1447]: time="2025-01-30T13:02:00.799966869Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.10037952s" Jan 30 13:02:00.800042 containerd[1447]: time="2025-01-30T13:02:00.800000309Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Jan 30 13:02:00.800503 containerd[1447]: time="2025-01-30T13:02:00.800470749Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 30 13:02:01.253465 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4132010904.mount: Deactivated successfully. 
Jan 30 13:02:01.261654 containerd[1447]: time="2025-01-30T13:02:01.261586309Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:02:01.262144 containerd[1447]: time="2025-01-30T13:02:01.262097829Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" Jan 30 13:02:01.263186 containerd[1447]: time="2025-01-30T13:02:01.263156429Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:02:01.265672 containerd[1447]: time="2025-01-30T13:02:01.265583549Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:02:01.266710 containerd[1447]: time="2025-01-30T13:02:01.266665589Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 466.15836ms" Jan 30 13:02:01.266710 containerd[1447]: time="2025-01-30T13:02:01.266706709Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Jan 30 13:02:01.267162 containerd[1447]: time="2025-01-30T13:02:01.267134509Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Jan 30 13:02:01.816087 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3437727568.mount: Deactivated successfully. Jan 30 13:02:03.016144 containerd[1447]: time="2025-01-30T13:02:03.016088629Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:02:03.017432 containerd[1447]: time="2025-01-30T13:02:03.017401069Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66406427" Jan 30 13:02:03.018348 containerd[1447]: time="2025-01-30T13:02:03.018296949Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:02:03.023443 containerd[1447]: time="2025-01-30T13:02:03.023392389Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:02:03.024611 containerd[1447]: time="2025-01-30T13:02:03.024575189Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 1.75740828s" Jan 30 13:02:03.024811 containerd[1447]: time="2025-01-30T13:02:03.024706669Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\"" Jan 30 13:02:07.005186 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 30 13:02:07.015879 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:02:07.040284 systemd[1]: Reloading requested from client PID 2005 ('systemctl') (unit session-7.scope)... Jan 30 13:02:07.040301 systemd[1]: Reloading... Jan 30 13:02:07.105670 zram_generator::config[2044]: No configuration found. Jan 30 13:02:07.226398 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 13:02:07.282989 systemd[1]: Reloading finished in 242 ms. Jan 30 13:02:07.331921 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 30 13:02:07.331987 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 30 13:02:07.332201 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:02:07.334531 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:02:07.459150 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:02:07.464834 (kubelet)[2090]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 30 13:02:07.511338 kubelet[2090]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 13:02:07.511338 kubelet[2090]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 30 13:02:07.511338 kubelet[2090]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 30 13:02:07.511338 kubelet[2090]: I0130 13:02:07.509369 2090 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 30 13:02:08.394276 kubelet[2090]: I0130 13:02:08.394217 2090 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Jan 30 13:02:08.394276 kubelet[2090]: I0130 13:02:08.394259 2090 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 30 13:02:08.394529 kubelet[2090]: I0130 13:02:08.394507 2090 server.go:929] "Client rotation is on, will bootstrap in background" Jan 30 13:02:08.514470 kubelet[2090]: E0130 13:02:08.514419 2090 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.89:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.89:6443: connect: connection refused" logger="UnhandledError" Jan 30 13:02:08.515213 kubelet[2090]: I0130 13:02:08.515011 2090 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 30 13:02:08.526752 kubelet[2090]: E0130 13:02:08.526713 2090 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 30 13:02:08.526957 kubelet[2090]: I0130 13:02:08.526943 2090 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 30 13:02:08.530612 kubelet[2090]: I0130 13:02:08.530578 2090 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 30 13:02:08.531589 kubelet[2090]: I0130 13:02:08.531508 2090 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 30 13:02:08.531737 kubelet[2090]: I0130 13:02:08.531690 2090 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 30 13:02:08.531924 kubelet[2090]: I0130 13:02:08.531727 2090 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 30 13:02:08.532172 kubelet[2090]: I0130 13:02:08.532139 2090 topology_manager.go:138] "Creating topology manager with none policy" Jan 30 13:02:08.532172 kubelet[2090]: I0130 13:02:08.532161 2090 container_manager_linux.go:300] "Creating device plugin manager" Jan 30 13:02:08.532525 kubelet[2090]: I0130 13:02:08.532505 2090 state_mem.go:36] "Initialized new in-memory state store" Jan 30 13:02:08.534964 kubelet[2090]: I0130 13:02:08.534933 2090 kubelet.go:408] "Attempting to sync node with API server" Jan 30 13:02:08.534999 kubelet[2090]: I0130 13:02:08.534966 2090 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 30 13:02:08.535131 kubelet[2090]: I0130 13:02:08.535115 2090 kubelet.go:314] "Adding apiserver pod source" Jan 30 13:02:08.535131 kubelet[2090]: I0130 13:02:08.535130 2090 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 30 13:02:08.538140 kubelet[2090]: W0130 13:02:08.538075 2090 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.89:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.89:6443: connect: connection refused Jan 30 13:02:08.538192 kubelet[2090]: E0130 13:02:08.538147 2090 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get 
\"https://10.0.0.89:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.89:6443: connect: connection refused" logger="UnhandledError" Jan 30 13:02:08.541444 kubelet[2090]: W0130 13:02:08.541391 2090 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.89:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.89:6443: connect: connection refused Jan 30 13:02:08.541527 kubelet[2090]: E0130 13:02:08.541456 2090 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.89:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.89:6443: connect: connection refused" logger="UnhandledError" Jan 30 13:02:08.544223 kubelet[2090]: I0130 13:02:08.544196 2090 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 30 13:02:08.547166 kubelet[2090]: I0130 13:02:08.547146 2090 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 30 13:02:08.548472 kubelet[2090]: W0130 13:02:08.548445 2090 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 30 13:02:08.552644 kubelet[2090]: I0130 13:02:08.549667 2090 server.go:1269] "Started kubelet" Jan 30 13:02:08.552644 kubelet[2090]: I0130 13:02:08.550943 2090 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 30 13:02:08.552644 kubelet[2090]: I0130 13:02:08.551744 2090 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 30 13:02:08.552644 kubelet[2090]: I0130 13:02:08.552213 2090 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 30 13:02:08.552974 kubelet[2090]: I0130 13:02:08.552920 2090 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 30 13:02:08.553303 kubelet[2090]: I0130 13:02:08.553155 2090 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 30 13:02:08.553924 kubelet[2090]: I0130 13:02:08.553843 2090 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 30 13:02:08.554008 kubelet[2090]: I0130 13:02:08.553992 2090 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 30 13:02:08.554063 kubelet[2090]: I0130 13:02:08.554047 2090 reconciler.go:26] "Reconciler: start to sync state" Jan 30 13:02:08.554287 kubelet[2090]: E0130 13:02:08.552672 2090 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.89:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.89:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.181f7a019dd1eeed default 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-01-30 13:02:08.549637869 +0000 UTC m=+1.080500201,LastTimestamp:2025-01-30 13:02:08.549637869 +0000 UTC m=+1.080500201,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 30 13:02:08.554380 kubelet[2090]: W0130 13:02:08.554349 2090 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.89:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.89:6443: connect: connection refused Jan 30 13:02:08.554420 kubelet[2090]: E0130 13:02:08.554395 2090 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.89:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.89:6443: connect: connection refused" logger="UnhandledError" Jan 30 13:02:08.555108 kubelet[2090]: I0130 13:02:08.554951 2090 server.go:460] "Adding debug handlers to kubelet server" Jan 30 13:02:08.555108 kubelet[2090]: E0130 13:02:08.555029 2090 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 30 13:02:08.555250 kubelet[2090]: I0130 13:02:08.555230 2090 factory.go:221] Registration of the systemd container factory successfully Jan 30 13:02:08.555348 kubelet[2090]: I0130 13:02:08.555329 2090 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 30 13:02:08.555434 kubelet[2090]: E0130 13:02:08.555409 2090 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.89:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.89:6443: connect: connection refused" interval="200ms" Jan 30 13:02:08.557192 kubelet[2090]: E0130 13:02:08.557171 2090 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 30 13:02:08.560175 kubelet[2090]: I0130 13:02:08.559939 2090 factory.go:221] Registration of the containerd container factory successfully Jan 30 13:02:08.572211 kubelet[2090]: I0130 13:02:08.571991 2090 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 30 13:02:08.574536 kubelet[2090]: I0130 13:02:08.573874 2090 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 30 13:02:08.574536 kubelet[2090]: I0130 13:02:08.573901 2090 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 30 13:02:08.574536 kubelet[2090]: I0130 13:02:08.573921 2090 kubelet.go:2321] "Starting kubelet main sync loop" Jan 30 13:02:08.574536 kubelet[2090]: E0130 13:02:08.573963 2090 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 30 13:02:08.574801 kubelet[2090]: W0130 13:02:08.574564 2090 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.89:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.89:6443: connect: connection refused Jan 30 13:02:08.574801 kubelet[2090]: E0130 13:02:08.574607 2090 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.89:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.89:6443: connect: connection refused" logger="UnhandledError" Jan 30 13:02:08.577392 kubelet[2090]: I0130 13:02:08.577372 2090 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 30 13:02:08.577772 kubelet[2090]: I0130 13:02:08.577524 2090 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 30 13:02:08.577772 kubelet[2090]: I0130 13:02:08.577546 2090 state_mem.go:36] "Initialized new in-memory state store" Jan 30 13:02:08.584156 kubelet[2090]: I0130 13:02:08.584128 2090 policy_none.go:49] "None policy: Start" Jan 30 13:02:08.585179 kubelet[2090]: I0130 13:02:08.585152 2090 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 30 13:02:08.585301 kubelet[2090]: I0130 13:02:08.585291 2090 state_mem.go:35] "Initializing new in-memory state store" Jan 30 13:02:08.594586 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 30 13:02:08.608062 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 30 13:02:08.611470 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 30 13:02:08.624845 kubelet[2090]: I0130 13:02:08.624705 2090 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 30 13:02:08.625657 kubelet[2090]: I0130 13:02:08.624967 2090 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 30 13:02:08.625657 kubelet[2090]: I0130 13:02:08.624991 2090 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 30 13:02:08.625657 kubelet[2090]: I0130 13:02:08.625458 2090 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 30 13:02:08.629642 kubelet[2090]: E0130 13:02:08.627212 2090 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 30 13:02:08.682426 systemd[1]: Created slice kubepods-burstable-podfa5289f3c0ba7f1736282e713231ffc5.slice - libcontainer container kubepods-burstable-podfa5289f3c0ba7f1736282e713231ffc5.slice. Jan 30 13:02:08.713457 systemd[1]: Created slice kubepods-burstable-podc988230cd0d49eebfaffbefbe8c74a10.slice - libcontainer container kubepods-burstable-podc988230cd0d49eebfaffbefbe8c74a10.slice. 
Jan 30 13:02:08.718707 systemd[1]: Created slice kubepods-burstable-pod76952b2975cac0c5f32dbd29f04e6efe.slice - libcontainer container kubepods-burstable-pod76952b2975cac0c5f32dbd29f04e6efe.slice. Jan 30 13:02:08.726880 kubelet[2090]: I0130 13:02:08.726810 2090 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jan 30 13:02:08.727406 kubelet[2090]: E0130 13:02:08.727378 2090 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.89:6443/api/v1/nodes\": dial tcp 10.0.0.89:6443: connect: connection refused" node="localhost" Jan 30 13:02:08.756897 kubelet[2090]: E0130 13:02:08.756857 2090 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.89:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.89:6443: connect: connection refused" interval="400ms" Jan 30 13:02:08.855507 kubelet[2090]: I0130 13:02:08.855461 2090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost" Jan 30 13:02:08.855507 kubelet[2090]: I0130 13:02:08.855500 2090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost" Jan 30 13:02:08.855704 kubelet[2090]: I0130 13:02:08.855532 2090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost" Jan 30 13:02:08.855704 kubelet[2090]: I0130 13:02:08.855553 2090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/76952b2975cac0c5f32dbd29f04e6efe-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"76952b2975cac0c5f32dbd29f04e6efe\") " pod="kube-system/kube-apiserver-localhost" Jan 30 13:02:08.855704 kubelet[2090]: I0130 13:02:08.855571 2090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/76952b2975cac0c5f32dbd29f04e6efe-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"76952b2975cac0c5f32dbd29f04e6efe\") " pod="kube-system/kube-apiserver-localhost" Jan 30 13:02:08.855876 kubelet[2090]: I0130 13:02:08.855843 2090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost" Jan 30 13:02:08.855876 kubelet[2090]: I0130 13:02:08.855873 2090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/c988230cd0d49eebfaffbefbe8c74a10-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"c988230cd0d49eebfaffbefbe8c74a10\") " pod="kube-system/kube-scheduler-localhost" Jan 30 13:02:08.855947 kubelet[2090]: I0130 13:02:08.855889 2090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/76952b2975cac0c5f32dbd29f04e6efe-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"76952b2975cac0c5f32dbd29f04e6efe\") " pod="kube-system/kube-apiserver-localhost" Jan 30 13:02:08.855947 kubelet[2090]: I0130 13:02:08.855904 2090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost" Jan 30 13:02:08.928578 kubelet[2090]: I0130 13:02:08.928543 2090 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jan 30 13:02:08.928910 kubelet[2090]: E0130 13:02:08.928874 2090 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.89:6443/api/v1/nodes\": dial tcp 10.0.0.89:6443: connect: connection refused" node="localhost" Jan 30 13:02:09.004613 kubelet[2090]: E0130 13:02:09.004494 2090 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:02:09.005291 containerd[1447]: time="2025-01-30T13:02:09.005238589Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:fa5289f3c0ba7f1736282e713231ffc5,Namespace:kube-system,Attempt:0,}" Jan 30 13:02:09.016789 kubelet[2090]: E0130 13:02:09.016754 2090 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:02:09.020237 containerd[1447]: time="2025-01-30T13:02:09.020189389Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:c988230cd0d49eebfaffbefbe8c74a10,Namespace:kube-system,Attempt:0,}" Jan 30 13:02:09.021514 kubelet[2090]: E0130 13:02:09.021489 2090 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:02:09.022108 containerd[1447]: time="2025-01-30T13:02:09.022070709Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:76952b2975cac0c5f32dbd29f04e6efe,Namespace:kube-system,Attempt:0,}" Jan 30 13:02:09.158195 kubelet[2090]: E0130 13:02:09.158140 2090 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.89:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.89:6443: connect: connection refused" interval="800ms" Jan 30 13:02:09.331048 kubelet[2090]: I0130 13:02:09.330645 2090 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jan 30 13:02:09.331048 kubelet[2090]: E0130 13:02:09.330995 2090 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.89:6443/api/v1/nodes\": dial tcp 10.0.0.89:6443: connect: connection refused" node="localhost" Jan 30 13:02:09.662554 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount1245277823.mount: Deactivated successfully. Jan 30 13:02:09.668142 containerd[1447]: time="2025-01-30T13:02:09.668082629Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:02:09.669325 containerd[1447]: time="2025-01-30T13:02:09.669280109Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Jan 30 13:02:09.672297 containerd[1447]: time="2025-01-30T13:02:09.672242389Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:02:09.673558 containerd[1447]: time="2025-01-30T13:02:09.673517629Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:02:09.674640 containerd[1447]: time="2025-01-30T13:02:09.674397149Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 30 13:02:09.675215 containerd[1447]: time="2025-01-30T13:02:09.675180909Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:02:09.675254 containerd[1447]: time="2025-01-30T13:02:09.675183629Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 30 13:02:09.677592 containerd[1447]: time="2025-01-30T13:02:09.677563229Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 657.28572ms" Jan 30 13:02:09.678030 containerd[1447]: time="2025-01-30T13:02:09.677994509Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:02:09.679201 containerd[1447]: time="2025-01-30T13:02:09.678878829Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 673.55568ms" Jan 30 13:02:09.686195 containerd[1447]: time="2025-01-30T13:02:09.686136509Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 663.99248ms" Jan 30 13:02:09.832862 kubelet[2090]: W0130 13:02:09.832802 2090 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get 
"https://10.0.0.89:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.89:6443: connect: connection refused Jan 30 13:02:09.832862 kubelet[2090]: E0130 13:02:09.832868 2090 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.89:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.89:6443: connect: connection refused" logger="UnhandledError" Jan 30 13:02:09.841648 containerd[1447]: time="2025-01-30T13:02:09.841516189Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:02:09.841648 containerd[1447]: time="2025-01-30T13:02:09.841581709Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:02:09.841648 containerd[1447]: time="2025-01-30T13:02:09.841424269Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:02:09.841648 containerd[1447]: time="2025-01-30T13:02:09.841483189Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:02:09.841648 containerd[1447]: time="2025-01-30T13:02:09.841500109Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:02:09.841878 containerd[1447]: time="2025-01-30T13:02:09.841611829Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:02:09.841878 containerd[1447]: time="2025-01-30T13:02:09.841784589Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:02:09.842050 containerd[1447]: time="2025-01-30T13:02:09.841875269Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:02:09.842706 containerd[1447]: time="2025-01-30T13:02:09.842635869Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:02:09.842706 containerd[1447]: time="2025-01-30T13:02:09.842679269Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:02:09.842834 containerd[1447]: time="2025-01-30T13:02:09.842690989Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:02:09.842989 containerd[1447]: time="2025-01-30T13:02:09.842950469Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:02:09.860828 kubelet[2090]: W0130 13:02:09.860784 2090 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.89:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.89:6443: connect: connection refused Jan 30 13:02:09.860946 kubelet[2090]: E0130 13:02:09.860833 2090 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.89:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.89:6443: connect: connection refused" logger="UnhandledError" Jan 30 13:02:09.861799 systemd[1]: Started cri-containerd-1d38b7decbbc57080a000a95e15921ca4463d6b5a83901b70d77a44a832852c1.scope - libcontainer container 1d38b7decbbc57080a000a95e15921ca4463d6b5a83901b70d77a44a832852c1. Jan 30 13:02:09.867032 systemd[1]: Started cri-containerd-0b53dfe923a3794b8b54ff2a965df3b7bab6ad793855def5589b01d539cf6a0c.scope - libcontainer container 0b53dfe923a3794b8b54ff2a965df3b7bab6ad793855def5589b01d539cf6a0c. Jan 30 13:02:09.868938 systemd[1]: Started cri-containerd-fed54394d8981209ba5e730d33e251301c851f32c9f3c4482aef33bb1c9e8d27.scope - libcontainer container fed54394d8981209ba5e730d33e251301c851f32c9f3c4482aef33bb1c9e8d27. Jan 30 13:02:09.911575 containerd[1447]: time="2025-01-30T13:02:09.911357509Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:c988230cd0d49eebfaffbefbe8c74a10,Namespace:kube-system,Attempt:0,} returns sandbox id \"1d38b7decbbc57080a000a95e15921ca4463d6b5a83901b70d77a44a832852c1\"" Jan 30 13:02:09.911699 containerd[1447]: time="2025-01-30T13:02:09.911512549Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:76952b2975cac0c5f32dbd29f04e6efe,Namespace:kube-system,Attempt:0,} returns sandbox id \"fed54394d8981209ba5e730d33e251301c851f32c9f3c4482aef33bb1c9e8d27\"" Jan 30 13:02:09.911699 containerd[1447]: time="2025-01-30T13:02:09.911524909Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:fa5289f3c0ba7f1736282e713231ffc5,Namespace:kube-system,Attempt:0,} returns sandbox id \"0b53dfe923a3794b8b54ff2a965df3b7bab6ad793855def5589b01d539cf6a0c\"" Jan 30 13:02:09.913743 kubelet[2090]: E0130 13:02:09.913162 2090 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:02:09.913743 kubelet[2090]: E0130 13:02:09.913242 2090 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:02:09.913743 kubelet[2090]: E0130 13:02:09.913379 2090 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:02:09.916975 containerd[1447]: time="2025-01-30T13:02:09.916926269Z" level=info msg="CreateContainer within sandbox \"1d38b7decbbc57080a000a95e15921ca4463d6b5a83901b70d77a44a832852c1\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 30 13:02:09.917517 containerd[1447]: time="2025-01-30T13:02:09.917486389Z" level=info msg="CreateContainer within sandbox 
\"fed54394d8981209ba5e730d33e251301c851f32c9f3c4482aef33bb1c9e8d27\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 30 13:02:09.918493 containerd[1447]: time="2025-01-30T13:02:09.918441549Z" level=info msg="CreateContainer within sandbox \"0b53dfe923a3794b8b54ff2a965df3b7bab6ad793855def5589b01d539cf6a0c\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 30 13:02:09.934140 containerd[1447]: time="2025-01-30T13:02:09.934086749Z" level=info msg="CreateContainer within sandbox \"1d38b7decbbc57080a000a95e15921ca4463d6b5a83901b70d77a44a832852c1\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"da416a69836fc7dc64abbcd10cc7de40dfd6cf663c0a50e89f38e77e3125fefb\"" Jan 30 13:02:09.934885 containerd[1447]: time="2025-01-30T13:02:09.934848789Z" level=info msg="StartContainer for \"da416a69836fc7dc64abbcd10cc7de40dfd6cf663c0a50e89f38e77e3125fefb\"" Jan 30 13:02:09.939211 containerd[1447]: time="2025-01-30T13:02:09.939135109Z" level=info msg="CreateContainer within sandbox \"fed54394d8981209ba5e730d33e251301c851f32c9f3c4482aef33bb1c9e8d27\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"25f221b274bf1a226435da3a22a9c9adbb14575f72755322b37ad547acddcbd2\"" Jan 30 13:02:09.940987 containerd[1447]: time="2025-01-30T13:02:09.939911389Z" level=info msg="StartContainer for \"25f221b274bf1a226435da3a22a9c9adbb14575f72755322b37ad547acddcbd2\"" Jan 30 13:02:09.944150 containerd[1447]: time="2025-01-30T13:02:09.944100109Z" level=info msg="CreateContainer within sandbox \"0b53dfe923a3794b8b54ff2a965df3b7bab6ad793855def5589b01d539cf6a0c\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"1a95aca2b6f77c0a1e853e3af45e43bdd19b8734cc0cc2aa11b30b84474de4e7\"" Jan 30 13:02:09.944798 containerd[1447]: time="2025-01-30T13:02:09.944773829Z" level=info msg="StartContainer for \"1a95aca2b6f77c0a1e853e3af45e43bdd19b8734cc0cc2aa11b30b84474de4e7\"" Jan 30 13:02:09.959340 kubelet[2090]: E0130 13:02:09.959278 2090 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.89:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.89:6443: connect: connection refused" interval="1.6s" Jan 30 13:02:09.962847 systemd[1]: Started cri-containerd-da416a69836fc7dc64abbcd10cc7de40dfd6cf663c0a50e89f38e77e3125fefb.scope - libcontainer container da416a69836fc7dc64abbcd10cc7de40dfd6cf663c0a50e89f38e77e3125fefb. Jan 30 13:02:09.969300 systemd[1]: Started cri-containerd-25f221b274bf1a226435da3a22a9c9adbb14575f72755322b37ad547acddcbd2.scope - libcontainer container 25f221b274bf1a226435da3a22a9c9adbb14575f72755322b37ad547acddcbd2. Jan 30 13:02:09.981871 systemd[1]: Started cri-containerd-1a95aca2b6f77c0a1e853e3af45e43bdd19b8734cc0cc2aa11b30b84474de4e7.scope - libcontainer container 1a95aca2b6f77c0a1e853e3af45e43bdd19b8734cc0cc2aa11b30b84474de4e7. 
Jan 30 13:02:09.996151 kubelet[2090]: W0130 13:02:09.996085 2090 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.89:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.89:6443: connect: connection refused Jan 30 13:02:09.996246 kubelet[2090]: E0130 13:02:09.996157 2090 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.89:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.89:6443: connect: connection refused" logger="UnhandledError" Jan 30 13:02:10.045475 containerd[1447]: time="2025-01-30T13:02:10.045085189Z" level=info msg="StartContainer for \"25f221b274bf1a226435da3a22a9c9adbb14575f72755322b37ad547acddcbd2\" returns successfully" Jan 30 13:02:10.045475 containerd[1447]: time="2025-01-30T13:02:10.045229509Z" level=info msg="StartContainer for \"da416a69836fc7dc64abbcd10cc7de40dfd6cf663c0a50e89f38e77e3125fefb\" returns successfully" Jan 30 13:02:10.065931 containerd[1447]: time="2025-01-30T13:02:10.065801349Z" level=info msg="StartContainer for \"1a95aca2b6f77c0a1e853e3af45e43bdd19b8734cc0cc2aa11b30b84474de4e7\" returns successfully" Jan 30 13:02:10.114054 kubelet[2090]: W0130 13:02:10.109000 2090 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.89:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.89:6443: connect: connection refused Jan 30 13:02:10.114054 kubelet[2090]: E0130 13:02:10.109076 2090 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.89:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.89:6443: connect: connection refused" logger="UnhandledError" Jan 30 13:02:10.138716 kubelet[2090]: I0130 13:02:10.132494 2090 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jan 30 13:02:10.138716 kubelet[2090]: E0130 13:02:10.132825 2090 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.89:6443/api/v1/nodes\": dial tcp 10.0.0.89:6443: connect: connection refused" node="localhost" Jan 30 13:02:10.243750 kubelet[2090]: E0130 13:02:10.243197 2090 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.89:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.89:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.181f7a019dd1eeed default 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-01-30 13:02:08.549637869 +0000 UTC m=+1.080500201,LastTimestamp:2025-01-30 13:02:08.549637869 +0000 UTC m=+1.080500201,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 30 13:02:10.581790 kubelet[2090]: E0130 13:02:10.581596 2090 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:02:10.590399 
kubelet[2090]: E0130 13:02:10.590013 2090 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:02:10.594381 kubelet[2090]: E0130 13:02:10.594313 2090 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:02:11.596376 kubelet[2090]: E0130 13:02:11.596292 2090 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:02:11.736826 kubelet[2090]: I0130 13:02:11.735043 2090 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jan 30 13:02:12.180997 kubelet[2090]: E0130 13:02:12.180794 2090 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jan 30 13:02:12.260718 kubelet[2090]: I0130 13:02:12.260672 2090 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Jan 30 13:02:12.536921 kubelet[2090]: I0130 13:02:12.536798 2090 apiserver.go:52] "Watching apiserver" Jan 30 13:02:12.555038 kubelet[2090]: I0130 13:02:12.554982 2090 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 30 13:02:12.605684 kubelet[2090]: E0130 13:02:12.602683 2090 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jan 30 13:02:12.605684 kubelet[2090]: E0130 13:02:12.602896 2090 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:02:12.846347 kubelet[2090]: E0130 13:02:12.846063 2090 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Jan 30 13:02:12.846347 kubelet[2090]: E0130 13:02:12.846248 2090 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:02:14.462010 systemd[1]: Reloading requested from client PID 2366 ('systemctl') (unit session-7.scope)... Jan 30 13:02:14.462030 systemd[1]: Reloading... Jan 30 13:02:14.539822 zram_generator::config[2408]: No configuration found. Jan 30 13:02:14.658933 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 13:02:14.728434 systemd[1]: Reloading finished in 265 ms. Jan 30 13:02:14.779206 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:02:14.795091 systemd[1]: kubelet.service: Deactivated successfully. Jan 30 13:02:14.795292 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:02:14.795344 systemd[1]: kubelet.service: Consumed 1.478s CPU time, 119.8M memory peak, 0B memory swap peak. Jan 30 13:02:14.807007 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 30 13:02:14.915988 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:02:14.922126 (kubelet)[2447]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 30 13:02:14.963466 kubelet[2447]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 13:02:14.963466 kubelet[2447]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 30 13:02:14.963466 kubelet[2447]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 13:02:14.963849 kubelet[2447]: I0130 13:02:14.963523 2447 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 30 13:02:14.972733 kubelet[2447]: I0130 13:02:14.970404 2447 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Jan 30 13:02:14.972733 kubelet[2447]: I0130 13:02:14.970439 2447 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 30 13:02:14.972733 kubelet[2447]: I0130 13:02:14.970726 2447 server.go:929] "Client rotation is on, will bootstrap in background" Jan 30 13:02:14.974305 kubelet[2447]: I0130 13:02:14.973465 2447 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 30 13:02:14.977173 kubelet[2447]: I0130 13:02:14.976393 2447 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 30 13:02:14.982357 kubelet[2447]: E0130 13:02:14.982029 2447 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 30 13:02:14.982357 kubelet[2447]: I0130 13:02:14.982104 2447 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 30 13:02:14.984696 kubelet[2447]: I0130 13:02:14.984574 2447 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 30 13:02:14.985056 kubelet[2447]: I0130 13:02:14.985021 2447 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 30 13:02:14.985163 kubelet[2447]: I0130 13:02:14.985127 2447 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 30 13:02:14.985378 kubelet[2447]: I0130 13:02:14.985160 2447 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 30 13:02:14.985468 kubelet[2447]: I0130 13:02:14.985379 2447 topology_manager.go:138] "Creating topology manager with none policy" Jan 30 13:02:14.985468 kubelet[2447]: I0130 13:02:14.985388 2447 container_manager_linux.go:300] "Creating device plugin manager" Jan 30 13:02:14.985468 kubelet[2447]: I0130 13:02:14.985418 2447 state_mem.go:36] "Initialized new in-memory state store" Jan 30 13:02:14.985533 kubelet[2447]: I0130 13:02:14.985524 2447 kubelet.go:408] "Attempting to sync node with API server" Jan 30 13:02:14.985556 kubelet[2447]: I0130 13:02:14.985537 2447 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 30 13:02:14.985579 kubelet[2447]: I0130 13:02:14.985558 2447 kubelet.go:314] "Adding apiserver pod source" Jan 30 13:02:14.985579 kubelet[2447]: I0130 13:02:14.985568 2447 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 30 13:02:14.988384 kubelet[2447]: I0130 13:02:14.988263 2447 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 30 13:02:14.989355 kubelet[2447]: I0130 13:02:14.988791 2447 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 30 13:02:14.989355 kubelet[2447]: I0130 13:02:14.989220 2447 server.go:1269] "Started kubelet" Jan 30 13:02:14.989462 kubelet[2447]: I0130 13:02:14.989339 2447 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 30 
13:02:14.989893 kubelet[2447]: I0130 13:02:14.989697 2447 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 30 13:02:14.990143 kubelet[2447]: I0130 13:02:14.989969 2447 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 30 13:02:14.994068 kubelet[2447]: I0130 13:02:14.992594 2447 server.go:460] "Adding debug handlers to kubelet server" Jan 30 13:02:14.994068 kubelet[2447]: I0130 13:02:14.992715 2447 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 30 13:02:14.995480 kubelet[2447]: I0130 13:02:14.995416 2447 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 30 13:02:14.997484 kubelet[2447]: E0130 13:02:14.997167 2447 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 30 13:02:14.997484 kubelet[2447]: I0130 13:02:14.997219 2447 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 30 13:02:14.997484 kubelet[2447]: I0130 13:02:14.997422 2447 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 30 13:02:14.997651 kubelet[2447]: I0130 13:02:14.997556 2447 reconciler.go:26] "Reconciler: start to sync state" Jan 30 13:02:15.022344 kubelet[2447]: I0130 13:02:15.021225 2447 factory.go:221] Registration of the systemd container factory successfully Jan 30 13:02:15.022344 kubelet[2447]: I0130 13:02:15.021393 2447 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 30 13:02:15.026657 kubelet[2447]: I0130 13:02:15.025670 2447 factory.go:221] Registration of the containerd container factory successfully Jan 30 13:02:15.028110 kubelet[2447]: E0130 13:02:15.027384 2447 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 30 13:02:15.028110 kubelet[2447]: I0130 13:02:15.027760 2447 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 30 13:02:15.034926 kubelet[2447]: I0130 13:02:15.034892 2447 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 30 13:02:15.034926 kubelet[2447]: I0130 13:02:15.034931 2447 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 30 13:02:15.035070 kubelet[2447]: I0130 13:02:15.034949 2447 kubelet.go:2321] "Starting kubelet main sync loop" Jan 30 13:02:15.035070 kubelet[2447]: E0130 13:02:15.035007 2447 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 30 13:02:15.069331 kubelet[2447]: I0130 13:02:15.069261 2447 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 30 13:02:15.069331 kubelet[2447]: I0130 13:02:15.069286 2447 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 30 13:02:15.069331 kubelet[2447]: I0130 13:02:15.069310 2447 state_mem.go:36] "Initialized new in-memory state store" Jan 30 13:02:15.069509 kubelet[2447]: I0130 13:02:15.069483 2447 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 30 13:02:15.069539 kubelet[2447]: I0130 13:02:15.069498 2447 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 30 13:02:15.069539 kubelet[2447]: I0130 13:02:15.069519 2447 policy_none.go:49] "None policy: Start" Jan 30 13:02:15.070510 kubelet[2447]: I0130 13:02:15.070476 2447 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 30 13:02:15.070510 kubelet[2447]: I0130 13:02:15.070507 2447 state_mem.go:35] "Initializing new in-memory state store" Jan 30 13:02:15.070891 kubelet[2447]: I0130 13:02:15.070827 2447 state_mem.go:75] "Updated machine memory state" Jan 30 13:02:15.076752 kubelet[2447]: I0130 13:02:15.076726 2447 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 30 13:02:15.077548 kubelet[2447]: I0130 13:02:15.077094 2447 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 30 13:02:15.077548 kubelet[2447]: I0130 13:02:15.077111 2447 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 30 13:02:15.077548 kubelet[2447]: I0130 13:02:15.077336 2447 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 30 13:02:15.182731 kubelet[2447]: I0130 13:02:15.182690 2447 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jan 30 13:02:15.193509 kubelet[2447]: I0130 13:02:15.193294 2447 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Jan 30 13:02:15.193509 kubelet[2447]: I0130 13:02:15.193458 2447 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Jan 30 13:02:15.199303 kubelet[2447]: I0130 13:02:15.199152 2447 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/76952b2975cac0c5f32dbd29f04e6efe-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"76952b2975cac0c5f32dbd29f04e6efe\") " pod="kube-system/kube-apiserver-localhost" Jan 30 13:02:15.199303 kubelet[2447]: I0130 13:02:15.199195 2447 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/76952b2975cac0c5f32dbd29f04e6efe-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"76952b2975cac0c5f32dbd29f04e6efe\") " pod="kube-system/kube-apiserver-localhost" Jan 30 13:02:15.199303 kubelet[2447]: I0130 13:02:15.199217 2447 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost" Jan 30 13:02:15.199303 kubelet[2447]: I0130 13:02:15.199235 2447 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c988230cd0d49eebfaffbefbe8c74a10-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"c988230cd0d49eebfaffbefbe8c74a10\") " pod="kube-system/kube-scheduler-localhost" Jan 30 13:02:15.199303 kubelet[2447]: I0130 13:02:15.199250 2447 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/76952b2975cac0c5f32dbd29f04e6efe-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"76952b2975cac0c5f32dbd29f04e6efe\") " pod="kube-system/kube-apiserver-localhost" Jan 30 13:02:15.199534 kubelet[2447]: I0130 13:02:15.199293 2447 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost" Jan 30 13:02:15.199534 kubelet[2447]: I0130 13:02:15.199328 2447 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost" Jan 30 13:02:15.199534 kubelet[2447]: I0130 13:02:15.199346 2447 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost" Jan 30 13:02:15.199534 kubelet[2447]: I0130 13:02:15.199362 2447 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost" Jan 30 13:02:15.448743 kubelet[2447]: E0130 13:02:15.446597 2447 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:02:15.448743 kubelet[2447]: E0130 13:02:15.447422 2447 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:02:15.448743 kubelet[2447]: E0130 13:02:15.447456 2447 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:02:15.986287 kubelet[2447]: I0130 13:02:15.986194 2447 apiserver.go:52] "Watching apiserver" Jan 30 13:02:15.997778 kubelet[2447]: I0130 13:02:15.997704 2447 
desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 30 13:02:16.049354 kubelet[2447]: E0130 13:02:16.048714 2447 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:02:16.049354 kubelet[2447]: E0130 13:02:16.048836 2447 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:02:16.051769 kubelet[2447]: E0130 13:02:16.051733 2447 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:02:16.090513 kubelet[2447]: I0130 13:02:16.088230 2447 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.088209589 podStartE2EDuration="1.088209589s" podCreationTimestamp="2025-01-30 13:02:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:02:16.075718829 +0000 UTC m=+1.150282961" watchObservedRunningTime="2025-01-30 13:02:16.088209589 +0000 UTC m=+1.162773561" Jan 30 13:02:16.111442 kubelet[2447]: I0130 13:02:16.110091 2447 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.110072109 podStartE2EDuration="1.110072109s" podCreationTimestamp="2025-01-30 13:02:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:02:16.088527469 +0000 UTC m=+1.163091481" watchObservedRunningTime="2025-01-30 13:02:16.110072109 +0000 UTC m=+1.184636121" Jan 30 13:02:16.132115 kubelet[2447]: I0130 13:02:16.131957 2447 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.131938829 podStartE2EDuration="1.131938829s" podCreationTimestamp="2025-01-30 13:02:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:02:16.111036629 +0000 UTC m=+1.185600641" watchObservedRunningTime="2025-01-30 13:02:16.131938829 +0000 UTC m=+1.206502841" Jan 30 13:02:17.050361 kubelet[2447]: E0130 13:02:17.050327 2447 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:02:17.131378 kubelet[2447]: E0130 13:02:17.131327 2447 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:02:19.942245 sudo[1619]: pam_unix(sudo:session): session closed for user root Jan 30 13:02:19.947399 sshd[1616]: pam_unix(sshd:session): session closed for user core Jan 30 13:02:19.950646 systemd[1]: sshd@6-10.0.0.89:22-10.0.0.1:38646.service: Deactivated successfully. Jan 30 13:02:19.953375 systemd[1]: session-7.scope: Deactivated successfully. Jan 30 13:02:19.953579 systemd[1]: session-7.scope: Consumed 6.165s CPU time, 154.2M memory peak, 0B memory swap peak. Jan 30 13:02:19.955340 systemd-logind[1423]: Session 7 logged out. Waiting for processes to exit. 
Jan 30 13:02:19.956318 systemd-logind[1423]: Removed session 7. Jan 30 13:02:20.647154 kubelet[2447]: I0130 13:02:20.647113 2447 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 30 13:02:20.648119 containerd[1447]: time="2025-01-30T13:02:20.647457789Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 30 13:02:20.649042 kubelet[2447]: I0130 13:02:20.648488 2447 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 30 13:02:21.657792 systemd[1]: Created slice kubepods-besteffort-pod47cabf1b_b2fc_463e_a94f_b41010284ec6.slice - libcontainer container kubepods-besteffort-pod47cabf1b_b2fc_463e_a94f_b41010284ec6.slice. Jan 30 13:02:21.728602 systemd[1]: Created slice kubepods-besteffort-pod0921f5e8_a620_4a0a_8bb1_fb2b3948fb53.slice - libcontainer container kubepods-besteffort-pod0921f5e8_a620_4a0a_8bb1_fb2b3948fb53.slice. Jan 30 13:02:21.842394 kubelet[2447]: I0130 13:02:21.842349 2447 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/0921f5e8-a620-4a0a-8bb1-fb2b3948fb53-var-lib-calico\") pod \"tigera-operator-76c4976dd7-7hrw4\" (UID: \"0921f5e8-a620-4a0a-8bb1-fb2b3948fb53\") " pod="tigera-operator/tigera-operator-76c4976dd7-7hrw4" Jan 30 13:02:21.842792 kubelet[2447]: I0130 13:02:21.842400 2447 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/47cabf1b-b2fc-463e-a94f-b41010284ec6-xtables-lock\") pod \"kube-proxy-rmwvp\" (UID: \"47cabf1b-b2fc-463e-a94f-b41010284ec6\") " pod="kube-system/kube-proxy-rmwvp" Jan 30 13:02:21.842792 kubelet[2447]: I0130 13:02:21.842430 2447 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6sz6m\" (UniqueName: \"kubernetes.io/projected/0921f5e8-a620-4a0a-8bb1-fb2b3948fb53-kube-api-access-6sz6m\") pod \"tigera-operator-76c4976dd7-7hrw4\" (UID: \"0921f5e8-a620-4a0a-8bb1-fb2b3948fb53\") " pod="tigera-operator/tigera-operator-76c4976dd7-7hrw4" Jan 30 13:02:21.842792 kubelet[2447]: I0130 13:02:21.842452 2447 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/47cabf1b-b2fc-463e-a94f-b41010284ec6-kube-proxy\") pod \"kube-proxy-rmwvp\" (UID: \"47cabf1b-b2fc-463e-a94f-b41010284ec6\") " pod="kube-system/kube-proxy-rmwvp" Jan 30 13:02:21.842792 kubelet[2447]: I0130 13:02:21.842467 2447 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/47cabf1b-b2fc-463e-a94f-b41010284ec6-lib-modules\") pod \"kube-proxy-rmwvp\" (UID: \"47cabf1b-b2fc-463e-a94f-b41010284ec6\") " pod="kube-system/kube-proxy-rmwvp" Jan 30 13:02:21.842792 kubelet[2447]: I0130 13:02:21.842482 2447 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ckxw8\" (UniqueName: \"kubernetes.io/projected/47cabf1b-b2fc-463e-a94f-b41010284ec6-kube-api-access-ckxw8\") pod \"kube-proxy-rmwvp\" (UID: \"47cabf1b-b2fc-463e-a94f-b41010284ec6\") " pod="kube-system/kube-proxy-rmwvp" Jan 30 13:02:21.971307 kubelet[2447]: E0130 13:02:21.971184 2447 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the 
applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:02:21.972362 containerd[1447]: time="2025-01-30T13:02:21.972324512Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rmwvp,Uid:47cabf1b-b2fc-463e-a94f-b41010284ec6,Namespace:kube-system,Attempt:0,}" Jan 30 13:02:22.006733 containerd[1447]: time="2025-01-30T13:02:22.006608146Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:02:22.006733 containerd[1447]: time="2025-01-30T13:02:22.006691426Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:02:22.006733 containerd[1447]: time="2025-01-30T13:02:22.006707346Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:02:22.006911 containerd[1447]: time="2025-01-30T13:02:22.006800025Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:02:22.030887 systemd[1]: Started cri-containerd-b6899d01f6fe5d21ec881bbee29a5717c73203620dea6755908b0591648811fb.scope - libcontainer container b6899d01f6fe5d21ec881bbee29a5717c73203620dea6755908b0591648811fb. Jan 30 13:02:22.035205 containerd[1447]: time="2025-01-30T13:02:22.035073154Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4976dd7-7hrw4,Uid:0921f5e8-a620-4a0a-8bb1-fb2b3948fb53,Namespace:tigera-operator,Attempt:0,}" Jan 30 13:02:22.053287 containerd[1447]: time="2025-01-30T13:02:22.053246071Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rmwvp,Uid:47cabf1b-b2fc-463e-a94f-b41010284ec6,Namespace:kube-system,Attempt:0,} returns sandbox id \"b6899d01f6fe5d21ec881bbee29a5717c73203620dea6755908b0591648811fb\"" Jan 30 13:02:22.055519 kubelet[2447]: E0130 13:02:22.054293 2447 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:02:22.059381 containerd[1447]: time="2025-01-30T13:02:22.059107352Z" level=info msg="CreateContainer within sandbox \"b6899d01f6fe5d21ec881bbee29a5717c73203620dea6755908b0591648811fb\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 30 13:02:22.061858 containerd[1447]: time="2025-01-30T13:02:22.061091458Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:02:22.061858 containerd[1447]: time="2025-01-30T13:02:22.061682854Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:02:22.062084 containerd[1447]: time="2025-01-30T13:02:22.061868373Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:02:22.062084 containerd[1447]: time="2025-01-30T13:02:22.061990172Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:02:22.081852 systemd[1]: Started cri-containerd-c1efb36fc2eec03e66a523a7773482819414c5bc2645c813a4f6a960bb3fdaff.scope - libcontainer container c1efb36fc2eec03e66a523a7773482819414c5bc2645c813a4f6a960bb3fdaff. 
Jan 30 13:02:22.088763 containerd[1447]: time="2025-01-30T13:02:22.088711912Z" level=info msg="CreateContainer within sandbox \"b6899d01f6fe5d21ec881bbee29a5717c73203620dea6755908b0591648811fb\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"bfb5889b78f8ad5a316ea5989ceb251874f929736174e8ec8f44a6322b6c23d5\"" Jan 30 13:02:22.089471 containerd[1447]: time="2025-01-30T13:02:22.089409107Z" level=info msg="StartContainer for \"bfb5889b78f8ad5a316ea5989ceb251874f929736174e8ec8f44a6322b6c23d5\"" Jan 30 13:02:22.115308 containerd[1447]: time="2025-01-30T13:02:22.115270892Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4976dd7-7hrw4,Uid:0921f5e8-a620-4a0a-8bb1-fb2b3948fb53,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"c1efb36fc2eec03e66a523a7773482819414c5bc2645c813a4f6a960bb3fdaff\"" Jan 30 13:02:22.117313 containerd[1447]: time="2025-01-30T13:02:22.117240799Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\"" Jan 30 13:02:22.121888 systemd[1]: Started cri-containerd-bfb5889b78f8ad5a316ea5989ceb251874f929736174e8ec8f44a6322b6c23d5.scope - libcontainer container bfb5889b78f8ad5a316ea5989ceb251874f929736174e8ec8f44a6322b6c23d5. Jan 30 13:02:22.146549 containerd[1447]: time="2025-01-30T13:02:22.146497241Z" level=info msg="StartContainer for \"bfb5889b78f8ad5a316ea5989ceb251874f929736174e8ec8f44a6322b6c23d5\" returns successfully" Jan 30 13:02:22.388911 kubelet[2447]: E0130 13:02:22.388751 2447 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:02:22.783231 kubelet[2447]: E0130 13:02:22.783060 2447 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:02:23.064934 kubelet[2447]: E0130 13:02:23.064805 2447 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:02:23.068252 kubelet[2447]: E0130 13:02:23.068122 2447 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:02:23.068737 kubelet[2447]: E0130 13:02:23.068716 2447 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:02:23.097002 kubelet[2447]: I0130 13:02:23.096945 2447 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-rmwvp" podStartSLOduration=2.096925901 podStartE2EDuration="2.096925901s" podCreationTimestamp="2025-01-30 13:02:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:02:23.07660915 +0000 UTC m=+8.151173162" watchObservedRunningTime="2025-01-30 13:02:23.096925901 +0000 UTC m=+8.171489913" Jan 30 13:02:23.209926 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2934493098.mount: Deactivated successfully. 
Jan 30 13:02:23.738656 containerd[1447]: time="2025-01-30T13:02:23.738132400Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:02:23.739129 containerd[1447]: time="2025-01-30T13:02:23.739092314Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=19124160" Jan 30 13:02:23.740133 containerd[1447]: time="2025-01-30T13:02:23.740057668Z" level=info msg="ImageCreate event name:\"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:02:23.754355 containerd[1447]: time="2025-01-30T13:02:23.754298458Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:02:23.755298 containerd[1447]: time="2025-01-30T13:02:23.755012613Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"19120155\" in 1.637732374s" Jan 30 13:02:23.755298 containerd[1447]: time="2025-01-30T13:02:23.755045373Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\"" Jan 30 13:02:23.760064 containerd[1447]: time="2025-01-30T13:02:23.759999422Z" level=info msg="CreateContainer within sandbox \"c1efb36fc2eec03e66a523a7773482819414c5bc2645c813a4f6a960bb3fdaff\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 30 13:02:23.773971 containerd[1447]: time="2025-01-30T13:02:23.773917773Z" level=info msg="CreateContainer within sandbox \"c1efb36fc2eec03e66a523a7773482819414c5bc2645c813a4f6a960bb3fdaff\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"335787a20898da538e092b9ef68fa9b75606c68dd9effb0ca0b794168d299878\"" Jan 30 13:02:23.774609 containerd[1447]: time="2025-01-30T13:02:23.774572489Z" level=info msg="StartContainer for \"335787a20898da538e092b9ef68fa9b75606c68dd9effb0ca0b794168d299878\"" Jan 30 13:02:23.808022 systemd[1]: Started cri-containerd-335787a20898da538e092b9ef68fa9b75606c68dd9effb0ca0b794168d299878.scope - libcontainer container 335787a20898da538e092b9ef68fa9b75606c68dd9effb0ca0b794168d299878. 
Jan 30 13:02:23.837052 containerd[1447]: time="2025-01-30T13:02:23.836442497Z" level=info msg="StartContainer for \"335787a20898da538e092b9ef68fa9b75606c68dd9effb0ca0b794168d299878\" returns successfully" Jan 30 13:02:24.070927 kubelet[2447]: E0130 13:02:24.070829 2447 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:02:24.071391 kubelet[2447]: E0130 13:02:24.070978 2447 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:02:24.087922 kubelet[2447]: I0130 13:02:24.087691 2447 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-76c4976dd7-7hrw4" podStartSLOduration=1.445849631 podStartE2EDuration="3.08767578s" podCreationTimestamp="2025-01-30 13:02:21 +0000 UTC" firstStartedPulling="2025-01-30 13:02:22.116751602 +0000 UTC m=+7.191315574" lastFinishedPulling="2025-01-30 13:02:23.758577711 +0000 UTC m=+8.833141723" observedRunningTime="2025-01-30 13:02:24.087111624 +0000 UTC m=+9.161675636" watchObservedRunningTime="2025-01-30 13:02:24.08767578 +0000 UTC m=+9.162239792" Jan 30 13:02:27.140125 kubelet[2447]: E0130 13:02:27.140091 2447 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:02:28.361534 systemd[1]: Created slice kubepods-besteffort-pod7d68b78e_efaf_4dc5_aebe_919266801fb5.slice - libcontainer container kubepods-besteffort-pod7d68b78e_efaf_4dc5_aebe_919266801fb5.slice. Jan 30 13:02:28.490047 kubelet[2447]: I0130 13:02:28.489985 2447 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/7d68b78e-efaf-4dc5-aebe-919266801fb5-typha-certs\") pod \"calico-typha-7848884cf4-pht2f\" (UID: \"7d68b78e-efaf-4dc5-aebe-919266801fb5\") " pod="calico-system/calico-typha-7848884cf4-pht2f" Jan 30 13:02:28.490047 kubelet[2447]: I0130 13:02:28.490035 2447 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxkrs\" (UniqueName: \"kubernetes.io/projected/7d68b78e-efaf-4dc5-aebe-919266801fb5-kube-api-access-vxkrs\") pod \"calico-typha-7848884cf4-pht2f\" (UID: \"7d68b78e-efaf-4dc5-aebe-919266801fb5\") " pod="calico-system/calico-typha-7848884cf4-pht2f" Jan 30 13:02:28.490047 kubelet[2447]: I0130 13:02:28.490056 2447 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7d68b78e-efaf-4dc5-aebe-919266801fb5-tigera-ca-bundle\") pod \"calico-typha-7848884cf4-pht2f\" (UID: \"7d68b78e-efaf-4dc5-aebe-919266801fb5\") " pod="calico-system/calico-typha-7848884cf4-pht2f" Jan 30 13:02:28.544998 systemd[1]: Created slice kubepods-besteffort-podab53fc82_a6ed_4d0e_9410_b14b2135cdae.slice - libcontainer container kubepods-besteffort-podab53fc82_a6ed_4d0e_9410_b14b2135cdae.slice. 
Jan 30 13:02:28.669463 kubelet[2447]: E0130 13:02:28.669157 2447 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:02:28.669758 containerd[1447]: time="2025-01-30T13:02:28.669711572Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7848884cf4-pht2f,Uid:7d68b78e-efaf-4dc5-aebe-919266801fb5,Namespace:calico-system,Attempt:0,}" Jan 30 13:02:28.691661 kubelet[2447]: I0130 13:02:28.691557 2447 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ab53fc82-a6ed-4d0e-9410-b14b2135cdae-xtables-lock\") pod \"calico-node-rl5fw\" (UID: \"ab53fc82-a6ed-4d0e-9410-b14b2135cdae\") " pod="calico-system/calico-node-rl5fw" Jan 30 13:02:28.691661 kubelet[2447]: I0130 13:02:28.691672 2447 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/ab53fc82-a6ed-4d0e-9410-b14b2135cdae-policysync\") pod \"calico-node-rl5fw\" (UID: \"ab53fc82-a6ed-4d0e-9410-b14b2135cdae\") " pod="calico-system/calico-node-rl5fw" Jan 30 13:02:28.691815 kubelet[2447]: I0130 13:02:28.691695 2447 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/ab53fc82-a6ed-4d0e-9410-b14b2135cdae-var-lib-calico\") pod \"calico-node-rl5fw\" (UID: \"ab53fc82-a6ed-4d0e-9410-b14b2135cdae\") " pod="calico-system/calico-node-rl5fw" Jan 30 13:02:28.691815 kubelet[2447]: I0130 13:02:28.691741 2447 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/ab53fc82-a6ed-4d0e-9410-b14b2135cdae-cni-net-dir\") pod \"calico-node-rl5fw\" (UID: \"ab53fc82-a6ed-4d0e-9410-b14b2135cdae\") " pod="calico-system/calico-node-rl5fw" Jan 30 13:02:28.691815 kubelet[2447]: I0130 13:02:28.691760 2447 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ab53fc82-a6ed-4d0e-9410-b14b2135cdae-tigera-ca-bundle\") pod \"calico-node-rl5fw\" (UID: \"ab53fc82-a6ed-4d0e-9410-b14b2135cdae\") " pod="calico-system/calico-node-rl5fw" Jan 30 13:02:28.691996 kubelet[2447]: I0130 13:02:28.691775 2447 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/ab53fc82-a6ed-4d0e-9410-b14b2135cdae-cni-bin-dir\") pod \"calico-node-rl5fw\" (UID: \"ab53fc82-a6ed-4d0e-9410-b14b2135cdae\") " pod="calico-system/calico-node-rl5fw" Jan 30 13:02:28.691996 kubelet[2447]: I0130 13:02:28.691936 2447 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ab53fc82-a6ed-4d0e-9410-b14b2135cdae-lib-modules\") pod \"calico-node-rl5fw\" (UID: \"ab53fc82-a6ed-4d0e-9410-b14b2135cdae\") " pod="calico-system/calico-node-rl5fw" Jan 30 13:02:28.691996 kubelet[2447]: I0130 13:02:28.691982 2447 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/ab53fc82-a6ed-4d0e-9410-b14b2135cdae-node-certs\") pod \"calico-node-rl5fw\" (UID: \"ab53fc82-a6ed-4d0e-9410-b14b2135cdae\") " pod="calico-system/calico-node-rl5fw" Jan 30 
13:02:28.692067 kubelet[2447]: I0130 13:02:28.692005 2447 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5d6gs\" (UniqueName: \"kubernetes.io/projected/ab53fc82-a6ed-4d0e-9410-b14b2135cdae-kube-api-access-5d6gs\") pod \"calico-node-rl5fw\" (UID: \"ab53fc82-a6ed-4d0e-9410-b14b2135cdae\") " pod="calico-system/calico-node-rl5fw" Jan 30 13:02:28.692067 kubelet[2447]: I0130 13:02:28.692023 2447 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/ab53fc82-a6ed-4d0e-9410-b14b2135cdae-var-run-calico\") pod \"calico-node-rl5fw\" (UID: \"ab53fc82-a6ed-4d0e-9410-b14b2135cdae\") " pod="calico-system/calico-node-rl5fw" Jan 30 13:02:28.692115 kubelet[2447]: I0130 13:02:28.692084 2447 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/ab53fc82-a6ed-4d0e-9410-b14b2135cdae-cni-log-dir\") pod \"calico-node-rl5fw\" (UID: \"ab53fc82-a6ed-4d0e-9410-b14b2135cdae\") " pod="calico-system/calico-node-rl5fw" Jan 30 13:02:28.692144 kubelet[2447]: I0130 13:02:28.692132 2447 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/ab53fc82-a6ed-4d0e-9410-b14b2135cdae-flexvol-driver-host\") pod \"calico-node-rl5fw\" (UID: \"ab53fc82-a6ed-4d0e-9410-b14b2135cdae\") " pod="calico-system/calico-node-rl5fw" Jan 30 13:02:28.702678 containerd[1447]: time="2025-01-30T13:02:28.702505502Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:02:28.702678 containerd[1447]: time="2025-01-30T13:02:28.702578741Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:02:28.703391 containerd[1447]: time="2025-01-30T13:02:28.702735941Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:02:28.703391 containerd[1447]: time="2025-01-30T13:02:28.703313058Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:02:28.729598 systemd[1]: Started cri-containerd-f2b9a202887960fe55effcf7b3c9b38baec86f58dcdfff2d3d65fb0d85d7eb8a.scope - libcontainer container f2b9a202887960fe55effcf7b3c9b38baec86f58dcdfff2d3d65fb0d85d7eb8a. 
Jan 30 13:02:28.738248 kubelet[2447]: E0130 13:02:28.738072 2447 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-slr48" podUID="e9f4c40c-83e4-4e09-bcf8-7d4d055d34c9" Jan 30 13:02:28.783589 containerd[1447]: time="2025-01-30T13:02:28.783528250Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7848884cf4-pht2f,Uid:7d68b78e-efaf-4dc5-aebe-919266801fb5,Namespace:calico-system,Attempt:0,} returns sandbox id \"f2b9a202887960fe55effcf7b3c9b38baec86f58dcdfff2d3d65fb0d85d7eb8a\"" Jan 30 13:02:28.803562 kubelet[2447]: E0130 13:02:28.803494 2447 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:02:28.804642 containerd[1447]: time="2025-01-30T13:02:28.804600833Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\"" Jan 30 13:02:28.811585 kubelet[2447]: E0130 13:02:28.811366 2447 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:02:28.811585 kubelet[2447]: W0130 13:02:28.811405 2447 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:02:28.811810 kubelet[2447]: E0130 13:02:28.811784 2447 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:02:28.811810 kubelet[2447]: W0130 13:02:28.811809 2447 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:02:28.811865 kubelet[2447]: E0130 13:02:28.811826 2447 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:02:28.811931 kubelet[2447]: E0130 13:02:28.811790 2447 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:02:28.812042 kubelet[2447]: E0130 13:02:28.812029 2447 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:02:28.812042 kubelet[2447]: W0130 13:02:28.812041 2447 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:02:28.812105 kubelet[2447]: E0130 13:02:28.812050 2447 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:02:28.812232 kubelet[2447]: E0130 13:02:28.812185 2447 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:02:28.812232 kubelet[2447]: W0130 13:02:28.812196 2447 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:02:28.812232 kubelet[2447]: E0130 13:02:28.812210 2447 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:02:28.812337 kubelet[2447]: E0130 13:02:28.812325 2447 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:02:28.812337 kubelet[2447]: W0130 13:02:28.812334 2447 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:02:28.812432 kubelet[2447]: E0130 13:02:28.812364 2447 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:02:28.812476 kubelet[2447]: E0130 13:02:28.812460 2447 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:02:28.812476 kubelet[2447]: W0130 13:02:28.812469 2447 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:02:28.812476 kubelet[2447]: E0130 13:02:28.812481 2447 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:02:28.813352 kubelet[2447]: E0130 13:02:28.813323 2447 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:02:28.813352 kubelet[2447]: W0130 13:02:28.813340 2447 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:02:28.813462 kubelet[2447]: E0130 13:02:28.813359 2447 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:02:28.814194 kubelet[2447]: E0130 13:02:28.814168 2447 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:02:28.814194 kubelet[2447]: W0130 13:02:28.814186 2447 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:02:28.814564 kubelet[2447]: E0130 13:02:28.814394 2447 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:02:28.814940 kubelet[2447]: E0130 13:02:28.814757 2447 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:02:28.814940 kubelet[2447]: W0130 13:02:28.814773 2447 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:02:28.814940 kubelet[2447]: E0130 13:02:28.814812 2447 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:02:28.817032 kubelet[2447]: E0130 13:02:28.817003 2447 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:02:28.817032 kubelet[2447]: W0130 13:02:28.817026 2447 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:02:28.817294 kubelet[2447]: E0130 13:02:28.817052 2447 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:02:28.817906 kubelet[2447]: E0130 13:02:28.817435 2447 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:02:28.817906 kubelet[2447]: W0130 13:02:28.817451 2447 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:02:28.817906 kubelet[2447]: E0130 13:02:28.817470 2447 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:02:28.848976 kubelet[2447]: E0130 13:02:28.848425 2447 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:02:28.850911 containerd[1447]: time="2025-01-30T13:02:28.850709502Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-rl5fw,Uid:ab53fc82-a6ed-4d0e-9410-b14b2135cdae,Namespace:calico-system,Attempt:0,}" Jan 30 13:02:28.876497 containerd[1447]: time="2025-01-30T13:02:28.876387104Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:02:28.876874 containerd[1447]: time="2025-01-30T13:02:28.876832662Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:02:28.876874 containerd[1447]: time="2025-01-30T13:02:28.876857102Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:02:28.877006 containerd[1447]: time="2025-01-30T13:02:28.876979221Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:02:28.896828 kubelet[2447]: E0130 13:02:28.896806 2447 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:02:28.897372 kubelet[2447]: W0130 13:02:28.897354 2447 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:02:28.897529 kubelet[2447]: E0130 13:02:28.897494 2447 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:02:28.897687 kubelet[2447]: I0130 13:02:28.897671 2447 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/e9f4c40c-83e4-4e09-bcf8-7d4d055d34c9-registration-dir\") pod \"csi-node-driver-slr48\" (UID: \"e9f4c40c-83e4-4e09-bcf8-7d4d055d34c9\") " pod="calico-system/csi-node-driver-slr48" Jan 30 13:02:28.898031 kubelet[2447]: E0130 13:02:28.898017 2447 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:02:28.898127 kubelet[2447]: W0130 13:02:28.898101 2447 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:02:28.898194 kubelet[2447]: E0130 13:02:28.898183 2447 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:02:28.898339 kubelet[2447]: I0130 13:02:28.898325 2447 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e9f4c40c-83e4-4e09-bcf8-7d4d055d34c9-kubelet-dir\") pod \"csi-node-driver-slr48\" (UID: \"e9f4c40c-83e4-4e09-bcf8-7d4d055d34c9\") " pod="calico-system/csi-node-driver-slr48" Jan 30 13:02:28.898574 kubelet[2447]: E0130 13:02:28.898561 2447 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:02:28.898727 kubelet[2447]: W0130 13:02:28.898661 2447 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:02:28.898727 kubelet[2447]: E0130 13:02:28.898684 2447 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:02:28.898828 systemd[1]: Started cri-containerd-fd7621f4b5a4e5bd02148ba75a4463aff693af6d89e18d03ff3331a6f0bfbd75.scope - libcontainer container fd7621f4b5a4e5bd02148ba75a4463aff693af6d89e18d03ff3331a6f0bfbd75. 
Jan 30 13:02:28.899239 kubelet[2447]: E0130 13:02:28.899148 2447 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:02:28.899239 kubelet[2447]: W0130 13:02:28.899162 2447 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:02:28.899239 kubelet[2447]: E0130 13:02:28.899180 2447 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:02:28.900551 kubelet[2447]: E0130 13:02:28.900428 2447 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:02:28.900551 kubelet[2447]: W0130 13:02:28.900445 2447 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:02:28.900551 kubelet[2447]: E0130 13:02:28.900460 2447 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:02:28.901035 kubelet[2447]: E0130 13:02:28.900993 2447 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:02:28.901035 kubelet[2447]: W0130 13:02:28.901012 2447 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:02:28.901283 kubelet[2447]: E0130 13:02:28.901251 2447 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:02:28.902361 kubelet[2447]: E0130 13:02:28.902244 2447 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:02:28.902361 kubelet[2447]: W0130 13:02:28.902260 2447 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:02:28.902361 kubelet[2447]: E0130 13:02:28.902272 2447 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:02:28.902361 kubelet[2447]: I0130 13:02:28.902302 2447 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/e9f4c40c-83e4-4e09-bcf8-7d4d055d34c9-socket-dir\") pod \"csi-node-driver-slr48\" (UID: \"e9f4c40c-83e4-4e09-bcf8-7d4d055d34c9\") " pod="calico-system/csi-node-driver-slr48" Jan 30 13:02:28.902800 kubelet[2447]: E0130 13:02:28.902768 2447 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:02:28.902800 kubelet[2447]: W0130 13:02:28.902786 2447 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:02:28.902964 kubelet[2447]: E0130 13:02:28.902904 2447 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:02:28.902964 kubelet[2447]: I0130 13:02:28.902928 2447 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pztlb\" (UniqueName: \"kubernetes.io/projected/e9f4c40c-83e4-4e09-bcf8-7d4d055d34c9-kube-api-access-pztlb\") pod \"csi-node-driver-slr48\" (UID: \"e9f4c40c-83e4-4e09-bcf8-7d4d055d34c9\") " pod="calico-system/csi-node-driver-slr48" Jan 30 13:02:28.903341 kubelet[2447]: E0130 13:02:28.903274 2447 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:02:28.903341 kubelet[2447]: W0130 13:02:28.903286 2447 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:02:28.903341 kubelet[2447]: E0130 13:02:28.903304 2447 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:02:28.903724 kubelet[2447]: E0130 13:02:28.903612 2447 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:02:28.903724 kubelet[2447]: W0130 13:02:28.903659 2447 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:02:28.903724 kubelet[2447]: E0130 13:02:28.903681 2447 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:02:28.904096 kubelet[2447]: E0130 13:02:28.904005 2447 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:02:28.904096 kubelet[2447]: W0130 13:02:28.904018 2447 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:02:28.904096 kubelet[2447]: E0130 13:02:28.904035 2447 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:02:28.904096 kubelet[2447]: I0130 13:02:28.904052 2447 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/e9f4c40c-83e4-4e09-bcf8-7d4d055d34c9-varrun\") pod \"csi-node-driver-slr48\" (UID: \"e9f4c40c-83e4-4e09-bcf8-7d4d055d34c9\") " pod="calico-system/csi-node-driver-slr48" Jan 30 13:02:28.904447 kubelet[2447]: E0130 13:02:28.904405 2447 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:02:28.904447 kubelet[2447]: W0130 13:02:28.904418 2447 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:02:28.904516 kubelet[2447]: E0130 13:02:28.904449 2447 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:02:28.905658 kubelet[2447]: E0130 13:02:28.904739 2447 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:02:28.905658 kubelet[2447]: W0130 13:02:28.904760 2447 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:02:28.905658 kubelet[2447]: E0130 13:02:28.904921 2447 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:02:28.906337 kubelet[2447]: E0130 13:02:28.906133 2447 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:02:28.906337 kubelet[2447]: W0130 13:02:28.906153 2447 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:02:28.906337 kubelet[2447]: E0130 13:02:28.906168 2447 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:02:28.912018 kubelet[2447]: E0130 13:02:28.911974 2447 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:02:28.912018 kubelet[2447]: W0130 13:02:28.912009 2447 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:02:28.912018 kubelet[2447]: E0130 13:02:28.912035 2447 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:02:28.926772 containerd[1447]: time="2025-01-30T13:02:28.926498834Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-rl5fw,Uid:ab53fc82-a6ed-4d0e-9410-b14b2135cdae,Namespace:calico-system,Attempt:0,} returns sandbox id \"fd7621f4b5a4e5bd02148ba75a4463aff693af6d89e18d03ff3331a6f0bfbd75\"" Jan 30 13:02:28.929587 kubelet[2447]: E0130 13:02:28.929526 2447 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:02:29.005053 kubelet[2447]: E0130 13:02:29.005010 2447 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:02:29.005053 kubelet[2447]: W0130 13:02:29.005037 2447 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:02:29.005053 kubelet[2447]: E0130 13:02:29.005058 2447 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:02:29.005256 kubelet[2447]: E0130 13:02:29.005238 2447 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:02:29.005256 kubelet[2447]: W0130 13:02:29.005250 2447 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:02:29.005315 kubelet[2447]: E0130 13:02:29.005274 2447 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:02:29.005479 kubelet[2447]: E0130 13:02:29.005458 2447 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:02:29.005479 kubelet[2447]: W0130 13:02:29.005470 2447 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:02:29.005479 kubelet[2447]: E0130 13:02:29.005481 2447 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:02:29.005671 kubelet[2447]: E0130 13:02:29.005657 2447 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:02:29.005671 kubelet[2447]: W0130 13:02:29.005668 2447 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:02:29.005729 kubelet[2447]: E0130 13:02:29.005679 2447 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:02:29.005870 kubelet[2447]: E0130 13:02:29.005844 2447 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:02:29.005870 kubelet[2447]: W0130 13:02:29.005856 2447 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:02:29.005870 kubelet[2447]: E0130 13:02:29.005864 2447 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:02:29.006053 kubelet[2447]: E0130 13:02:29.006029 2447 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:02:29.006053 kubelet[2447]: W0130 13:02:29.006040 2447 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:02:29.006053 kubelet[2447]: E0130 13:02:29.006052 2447 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:02:29.006203 kubelet[2447]: E0130 13:02:29.006191 2447 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:02:29.006203 kubelet[2447]: W0130 13:02:29.006201 2447 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:02:29.006250 kubelet[2447]: E0130 13:02:29.006210 2447 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:02:29.006338 kubelet[2447]: E0130 13:02:29.006328 2447 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:02:29.006338 kubelet[2447]: W0130 13:02:29.006337 2447 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:02:29.006412 kubelet[2447]: E0130 13:02:29.006384 2447 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:02:29.006511 kubelet[2447]: E0130 13:02:29.006499 2447 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:02:29.006549 kubelet[2447]: W0130 13:02:29.006510 2447 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:02:29.006549 kubelet[2447]: E0130 13:02:29.006532 2447 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:02:29.006683 kubelet[2447]: E0130 13:02:29.006670 2447 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:02:29.006683 kubelet[2447]: W0130 13:02:29.006681 2447 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:02:29.006760 kubelet[2447]: E0130 13:02:29.006751 2447 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:02:29.006822 kubelet[2447]: E0130 13:02:29.006810 2447 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:02:29.006822 kubelet[2447]: W0130 13:02:29.006820 2447 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:02:29.006893 kubelet[2447]: E0130 13:02:29.006884 2447 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:02:29.006959 kubelet[2447]: E0130 13:02:29.006946 2447 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:02:29.006959 kubelet[2447]: W0130 13:02:29.006956 2447 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:02:29.007090 kubelet[2447]: E0130 13:02:29.007042 2447 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:02:29.007128 kubelet[2447]: E0130 13:02:29.007122 2447 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:02:29.007170 kubelet[2447]: W0130 13:02:29.007129 2447 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:02:29.007170 kubelet[2447]: E0130 13:02:29.007144 2447 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:02:29.007305 kubelet[2447]: E0130 13:02:29.007291 2447 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:02:29.007305 kubelet[2447]: W0130 13:02:29.007302 2447 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:02:29.007392 kubelet[2447]: E0130 13:02:29.007314 2447 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:02:29.007454 kubelet[2447]: E0130 13:02:29.007443 2447 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:02:29.007454 kubelet[2447]: W0130 13:02:29.007453 2447 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:02:29.007509 kubelet[2447]: E0130 13:02:29.007461 2447 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:02:29.007741 kubelet[2447]: E0130 13:02:29.007727 2447 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:02:29.007741 kubelet[2447]: W0130 13:02:29.007740 2447 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:02:29.007810 kubelet[2447]: E0130 13:02:29.007757 2447 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:02:29.008000 kubelet[2447]: E0130 13:02:29.007985 2447 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:02:29.008029 kubelet[2447]: W0130 13:02:29.008000 2447 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:02:29.008029 kubelet[2447]: E0130 13:02:29.008017 2447 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:02:29.008178 kubelet[2447]: E0130 13:02:29.008166 2447 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:02:29.008178 kubelet[2447]: W0130 13:02:29.008177 2447 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:02:29.008230 kubelet[2447]: E0130 13:02:29.008218 2447 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:02:29.008374 kubelet[2447]: E0130 13:02:29.008345 2447 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:02:29.008374 kubelet[2447]: W0130 13:02:29.008358 2447 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:02:29.008433 kubelet[2447]: E0130 13:02:29.008398 2447 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:02:29.008525 kubelet[2447]: E0130 13:02:29.008501 2447 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:02:29.008525 kubelet[2447]: W0130 13:02:29.008511 2447 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:02:29.008586 kubelet[2447]: E0130 13:02:29.008577 2447 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:02:29.008706 kubelet[2447]: E0130 13:02:29.008692 2447 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:02:29.008706 kubelet[2447]: W0130 13:02:29.008702 2447 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:02:29.008772 kubelet[2447]: E0130 13:02:29.008716 2447 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:02:29.008917 kubelet[2447]: E0130 13:02:29.008884 2447 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:02:29.008917 kubelet[2447]: W0130 13:02:29.008897 2447 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:02:29.008917 kubelet[2447]: E0130 13:02:29.008909 2447 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:02:29.009357 kubelet[2447]: E0130 13:02:29.009197 2447 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:02:29.009357 kubelet[2447]: W0130 13:02:29.009213 2447 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:02:29.009357 kubelet[2447]: E0130 13:02:29.009241 2447 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:02:29.009558 kubelet[2447]: E0130 13:02:29.009534 2447 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:02:29.009647 kubelet[2447]: W0130 13:02:29.009605 2447 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:02:29.009708 kubelet[2447]: E0130 13:02:29.009695 2447 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:02:29.085004 kubelet[2447]: E0130 13:02:29.084953 2447 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:02:29.085004 kubelet[2447]: W0130 13:02:29.084986 2447 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:02:29.085004 kubelet[2447]: E0130 13:02:29.085005 2447 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:02:29.093950 kubelet[2447]: E0130 13:02:29.093846 2447 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:02:29.093950 kubelet[2447]: W0130 13:02:29.093872 2447 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:02:29.093950 kubelet[2447]: E0130 13:02:29.093891 2447 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:02:29.626265 update_engine[1428]: I20250130 13:02:29.625653 1428 update_attempter.cc:509] Updating boot flags... Jan 30 13:02:29.657364 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2985) Jan 30 13:02:29.697844 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2986) Jan 30 13:02:29.995774 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1255318537.mount: Deactivated successfully. 
Jan 30 13:02:30.046575 kubelet[2447]: E0130 13:02:30.046432 2447 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-slr48" podUID="e9f4c40c-83e4-4e09-bcf8-7d4d055d34c9" Jan 30 13:02:30.999593 containerd[1447]: time="2025-01-30T13:02:30.999541127Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:02:31.000495 containerd[1447]: time="2025-01-30T13:02:31.000359604Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=29231308" Jan 30 13:02:31.010510 containerd[1447]: time="2025-01-30T13:02:31.009375729Z" level=info msg="ImageCreate event name:\"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:02:31.013695 containerd[1447]: time="2025-01-30T13:02:31.013589594Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:02:31.014682 containerd[1447]: time="2025-01-30T13:02:31.014647470Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"29231162\" in 2.209980997s" Jan 30 13:02:31.014848 containerd[1447]: time="2025-01-30T13:02:31.014756629Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\"" Jan 30 13:02:31.016483 containerd[1447]: time="2025-01-30T13:02:31.016445143Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Jan 30 13:02:31.036665 containerd[1447]: time="2025-01-30T13:02:31.036292468Z" level=info msg="CreateContainer within sandbox \"f2b9a202887960fe55effcf7b3c9b38baec86f58dcdfff2d3d65fb0d85d7eb8a\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 30 13:02:31.055013 containerd[1447]: time="2025-01-30T13:02:31.054955317Z" level=info msg="CreateContainer within sandbox \"f2b9a202887960fe55effcf7b3c9b38baec86f58dcdfff2d3d65fb0d85d7eb8a\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"3677f7d4748b0f428b14813704fe6d1834c9b9ffdc286ba81b6ad5688b3e04ab\"" Jan 30 13:02:31.055695 containerd[1447]: time="2025-01-30T13:02:31.055504675Z" level=info msg="StartContainer for \"3677f7d4748b0f428b14813704fe6d1834c9b9ffdc286ba81b6ad5688b3e04ab\"" Jan 30 13:02:31.085841 systemd[1]: Started cri-containerd-3677f7d4748b0f428b14813704fe6d1834c9b9ffdc286ba81b6ad5688b3e04ab.scope - libcontainer container 3677f7d4748b0f428b14813704fe6d1834c9b9ffdc286ba81b6ad5688b3e04ab. 
Jan 30 13:02:31.118933 containerd[1447]: time="2025-01-30T13:02:31.118875796Z" level=info msg="StartContainer for \"3677f7d4748b0f428b14813704fe6d1834c9b9ffdc286ba81b6ad5688b3e04ab\" returns successfully" Jan 30 13:02:32.035808 kubelet[2447]: E0130 13:02:32.035750 2447 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-slr48" podUID="e9f4c40c-83e4-4e09-bcf8-7d4d055d34c9" Jan 30 13:02:32.103486 kubelet[2447]: E0130 13:02:32.103426 2447 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:02:32.133206 kubelet[2447]: I0130 13:02:32.133138 2447 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7848884cf4-pht2f" podStartSLOduration=1.921212645 podStartE2EDuration="4.133121514s" podCreationTimestamp="2025-01-30 13:02:28 +0000 UTC" firstStartedPulling="2025-01-30 13:02:28.804317075 +0000 UTC m=+13.878881087" lastFinishedPulling="2025-01-30 13:02:31.016225984 +0000 UTC m=+16.090789956" observedRunningTime="2025-01-30 13:02:32.132071797 +0000 UTC m=+17.206635769" watchObservedRunningTime="2025-01-30 13:02:32.133121514 +0000 UTC m=+17.207685526" Jan 30 13:02:32.146403 kubelet[2447]: E0130 13:02:32.144764 2447 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:02:32.146403 kubelet[2447]: W0130 13:02:32.144792 2447 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:02:32.146403 kubelet[2447]: E0130 13:02:32.144815 2447 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:02:32.146403 kubelet[2447]: E0130 13:02:32.145121 2447 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:02:32.146403 kubelet[2447]: W0130 13:02:32.145131 2447 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:02:32.146403 kubelet[2447]: E0130 13:02:32.145144 2447 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:02:32.146403 kubelet[2447]: E0130 13:02:32.145389 2447 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:02:32.146403 kubelet[2447]: W0130 13:02:32.145450 2447 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:02:32.146403 kubelet[2447]: E0130 13:02:32.145478 2447 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:02:32.146403 kubelet[2447]: E0130 13:02:32.145734 2447 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:02:32.147092 kubelet[2447]: W0130 13:02:32.145743 2447 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:02:32.147092 kubelet[2447]: E0130 13:02:32.145752 2447 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:02:32.147092 kubelet[2447]: E0130 13:02:32.146024 2447 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:02:32.147092 kubelet[2447]: W0130 13:02:32.146041 2447 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:02:32.147092 kubelet[2447]: E0130 13:02:32.146051 2447 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:02:32.147092 kubelet[2447]: E0130 13:02:32.146265 2447 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:02:32.147092 kubelet[2447]: W0130 13:02:32.146275 2447 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:02:32.147092 kubelet[2447]: E0130 13:02:32.146283 2447 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:02:32.147092 kubelet[2447]: E0130 13:02:32.146484 2447 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:02:32.147092 kubelet[2447]: W0130 13:02:32.146501 2447 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:02:32.147401 kubelet[2447]: E0130 13:02:32.146510 2447 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:02:32.147401 kubelet[2447]: E0130 13:02:32.146754 2447 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:02:32.147401 kubelet[2447]: W0130 13:02:32.146764 2447 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:02:32.147401 kubelet[2447]: E0130 13:02:32.146772 2447 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:02:32.439826 containerd[1447]: time="2025-01-30T13:02:32.439582788Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:02:32.440949 containerd[1447]: time="2025-01-30T13:02:32.440749104Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=5117811" Jan 30 13:02:32.441995 containerd[1447]: time="2025-01-30T13:02:32.441726780Z" level=info msg="ImageCreate event name:\"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:02:32.449497 containerd[1447]: time="2025-01-30T13:02:32.448977195Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:02:32.449654 containerd[1447]: time="2025-01-30T13:02:32.449580313Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6487425\" in 1.43309073s" Jan 30 13:02:32.449654 containerd[1447]: time="2025-01-30T13:02:32.449634392Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\"" Jan 30 13:02:32.453263 containerd[1447]: time="2025-01-30T13:02:32.453218460Z" level=info msg="CreateContainer within sandbox \"fd7621f4b5a4e5bd02148ba75a4463aff693af6d89e18d03ff3331a6f0bfbd75\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 30 13:02:32.473250 containerd[1447]: time="2025-01-30T13:02:32.473194389Z" level=info msg="CreateContainer within sandbox \"fd7621f4b5a4e5bd02148ba75a4463aff693af6d89e18d03ff3331a6f0bfbd75\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"d0fae437cd0cf1d409a3164c006531f852205315af5a9bf0fb4f22353bfda1e0\"" Jan 30 13:02:32.473794 containerd[1447]: time="2025-01-30T13:02:32.473757587Z" level=info msg="StartContainer for \"d0fae437cd0cf1d409a3164c006531f852205315af5a9bf0fb4f22353bfda1e0\"" Jan 30 13:02:32.517871 systemd[1]: Started cri-containerd-d0fae437cd0cf1d409a3164c006531f852205315af5a9bf0fb4f22353bfda1e0.scope - libcontainer container d0fae437cd0cf1d409a3164c006531f852205315af5a9bf0fb4f22353bfda1e0. Jan 30 13:02:32.554573 containerd[1447]: time="2025-01-30T13:02:32.554517181Z" level=info msg="StartContainer for \"d0fae437cd0cf1d409a3164c006531f852205315af5a9bf0fb4f22353bfda1e0\" returns successfully" Jan 30 13:02:32.624597 systemd[1]: cri-containerd-d0fae437cd0cf1d409a3164c006531f852205315af5a9bf0fb4f22353bfda1e0.scope: Deactivated successfully. Jan 30 13:02:32.656531 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d0fae437cd0cf1d409a3164c006531f852205315af5a9bf0fb4f22353bfda1e0-rootfs.mount: Deactivated successfully. 
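The repeated FlexVolume probe failures above come from the kubelet invoking /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the init argument before Calico's flexvol-driver init container (created just above from the ghcr.io/flatcar/calico/pod2daemon-flexvol image) has installed that binary: with no executable present the call produces empty output, and the JSON unmarshal in driver-call.go fails with "unexpected end of JSON input". The following is a minimal Go sketch of the DriverStatus JSON a FlexVolume driver's init call is expected to print; the type and field names follow the FlexVolume convention but are written here only for illustration, not taken from the kubelet source.

package main

// Hedged sketch only: the JSON shape a FlexVolume driver is expected to print
// for the "init" call. In the log above the nodeagent~uds/uds executable does
// not exist yet, so the kubelet gets empty output and "unexpected end of JSON
// input" when it tries to unmarshal it.

import (
	"encoding/json"
	"fmt"
)

// driverStatus mirrors the conventional FlexVolume response fields; it is an
// illustrative type, not the kubelet's own.
type driverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	out, err := json.Marshal(driverStatus{
		Status:       "Success",
		Capabilities: map[string]bool{"attach": false},
	})
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out)) // {"status":"Success","capabilities":{"attach":false}}
}

Once a real driver binary is present at that path and prints JSON of this shape, the same probe unmarshals cleanly and the plugin-probe errors stop.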
Jan 30 13:02:32.675438 containerd[1447]: time="2025-01-30T13:02:32.675374633Z" level=info msg="shim disconnected" id=d0fae437cd0cf1d409a3164c006531f852205315af5a9bf0fb4f22353bfda1e0 namespace=k8s.io Jan 30 13:02:32.675869 containerd[1447]: time="2025-01-30T13:02:32.675678151Z" level=warning msg="cleaning up after shim disconnected" id=d0fae437cd0cf1d409a3164c006531f852205315af5a9bf0fb4f22353bfda1e0 namespace=k8s.io Jan 30 13:02:32.675869 containerd[1447]: time="2025-01-30T13:02:32.675697751Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:02:33.113281 kubelet[2447]: I0130 13:02:33.112868 2447 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 13:02:33.113281 kubelet[2447]: E0130 13:02:33.113219 2447 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:02:33.115379 kubelet[2447]: E0130 13:02:33.115336 2447 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:02:33.119022 containerd[1447]: time="2025-01-30T13:02:33.118988367Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Jan 30 13:02:34.035365 kubelet[2447]: E0130 13:02:34.035307 2447 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-slr48" podUID="e9f4c40c-83e4-4e09-bcf8-7d4d055d34c9" Jan 30 13:02:35.830579 containerd[1447]: time="2025-01-30T13:02:35.830507781Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:02:35.831632 containerd[1447]: time="2025-01-30T13:02:35.831302939Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=89703123" Jan 30 13:02:35.832550 containerd[1447]: time="2025-01-30T13:02:35.832481016Z" level=info msg="ImageCreate event name:\"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:02:35.834811 containerd[1447]: time="2025-01-30T13:02:35.834723409Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:02:35.835650 containerd[1447]: time="2025-01-30T13:02:35.835329727Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"91072777\" in 2.71630308s" Jan 30 13:02:35.835650 containerd[1447]: time="2025-01-30T13:02:35.835365847Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\"" Jan 30 13:02:35.838282 containerd[1447]: time="2025-01-30T13:02:35.838239279Z" level=info msg="CreateContainer within sandbox \"fd7621f4b5a4e5bd02148ba75a4463aff693af6d89e18d03ff3331a6f0bfbd75\" for container 
&ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 30 13:02:35.859389 containerd[1447]: time="2025-01-30T13:02:35.859343097Z" level=info msg="CreateContainer within sandbox \"fd7621f4b5a4e5bd02148ba75a4463aff693af6d89e18d03ff3331a6f0bfbd75\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"95204b6fc8d61e134fedc10e054fef8f1a873d4b07c7ebfad6cfa5543425dcbc\"" Jan 30 13:02:35.861690 containerd[1447]: time="2025-01-30T13:02:35.860185935Z" level=info msg="StartContainer for \"95204b6fc8d61e134fedc10e054fef8f1a873d4b07c7ebfad6cfa5543425dcbc\"" Jan 30 13:02:35.891844 systemd[1]: Started cri-containerd-95204b6fc8d61e134fedc10e054fef8f1a873d4b07c7ebfad6cfa5543425dcbc.scope - libcontainer container 95204b6fc8d61e134fedc10e054fef8f1a873d4b07c7ebfad6cfa5543425dcbc. Jan 30 13:02:35.921380 containerd[1447]: time="2025-01-30T13:02:35.921180317Z" level=info msg="StartContainer for \"95204b6fc8d61e134fedc10e054fef8f1a873d4b07c7ebfad6cfa5543425dcbc\" returns successfully" Jan 30 13:02:36.035252 kubelet[2447]: E0130 13:02:36.035196 2447 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-slr48" podUID="e9f4c40c-83e4-4e09-bcf8-7d4d055d34c9" Jan 30 13:02:36.123272 kubelet[2447]: E0130 13:02:36.122853 2447 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:02:36.680807 systemd[1]: cri-containerd-95204b6fc8d61e134fedc10e054fef8f1a873d4b07c7ebfad6cfa5543425dcbc.scope: Deactivated successfully. Jan 30 13:02:36.701655 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-95204b6fc8d61e134fedc10e054fef8f1a873d4b07c7ebfad6cfa5543425dcbc-rootfs.mount: Deactivated successfully. Jan 30 13:02:36.722888 kubelet[2447]: I0130 13:02:36.722852 2447 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Jan 30 13:02:36.776654 containerd[1447]: time="2025-01-30T13:02:36.776207562Z" level=info msg="shim disconnected" id=95204b6fc8d61e134fedc10e054fef8f1a873d4b07c7ebfad6cfa5543425dcbc namespace=k8s.io Jan 30 13:02:36.776654 containerd[1447]: time="2025-01-30T13:02:36.776289162Z" level=warning msg="cleaning up after shim disconnected" id=95204b6fc8d61e134fedc10e054fef8f1a873d4b07c7ebfad6cfa5543425dcbc namespace=k8s.io Jan 30 13:02:36.776654 containerd[1447]: time="2025-01-30T13:02:36.776298442Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:02:36.816019 systemd[1]: Created slice kubepods-burstable-podc604cc46_99d4_4353_8500_4bb310160935.slice - libcontainer container kubepods-burstable-podc604cc46_99d4_4353_8500_4bb310160935.slice. Jan 30 13:02:36.826792 systemd[1]: Created slice kubepods-burstable-pod70d42510_d7d7_428a_b1a1_8b675ee51848.slice - libcontainer container kubepods-burstable-pod70d42510_d7d7_428a_b1a1_8b675ee51848.slice. Jan 30 13:02:36.835535 systemd[1]: Created slice kubepods-besteffort-pod3cc3298b_ef9c_4aeb_903e_4a8f5c9daf9d.slice - libcontainer container kubepods-besteffort-pod3cc3298b_ef9c_4aeb_903e_4a8f5c9daf9d.slice. Jan 30 13:02:36.841877 systemd[1]: Created slice kubepods-besteffort-pod375a330c_8230_4057_b70e_a0f2609c831f.slice - libcontainer container kubepods-besteffort-pod375a330c_8230_4057_b70e_a0f2609c831f.slice. 
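The RunPodSandbox failures that follow all report the same underlying condition: the Calico CNI plugin stats /var/lib/calico/nodename, a file written by the calico/node container once it is running, and refuses to set up or tear down pod networking until that file exists. Below is a standalone Go sketch of that readiness check, written here for illustration rather than taken from Calico's source.

package main

// Hedged sketch only: the readiness check implied by the "stat
// /var/lib/calico/nodename: no such file or directory" errors below. The file
// is written by the calico/node container after it starts; this program is a
// standalone illustration, not Calico's actual implementation.

import (
	"fmt"
	"os"
)

func calicoNodeReady(nodenameFile string) (bool, error) {
	if _, err := os.Stat(nodenameFile); err != nil {
		if os.IsNotExist(err) {
			return false, fmt.Errorf("stat %s: %w: check that the calico/node container is running and has mounted /var/lib/calico/", nodenameFile, err)
		}
		return false, err
	}
	return true, nil
}

func main() {
	ready, err := calicoNodeReady("/var/lib/calico/nodename")
	fmt.Printf("ready=%v err=%v\n", ready, err)
}

Until that file appears, each coredns, calico-apiserver, calico-kube-controllers and csi-node-driver sandbox below fails with a CreatePodSandboxError and the corresponding pods remain pending.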
Jan 30 13:02:36.859455 systemd[1]: Created slice kubepods-besteffort-pod6f5a0fbe_ae0d_4b06_9fb9_d4626175bd41.slice - libcontainer container kubepods-besteffort-pod6f5a0fbe_ae0d_4b06_9fb9_d4626175bd41.slice. Jan 30 13:02:36.969587 kubelet[2447]: I0130 13:02:36.969441 2447 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g996f\" (UniqueName: \"kubernetes.io/projected/70d42510-d7d7-428a-b1a1-8b675ee51848-kube-api-access-g996f\") pod \"coredns-6f6b679f8f-drzkj\" (UID: \"70d42510-d7d7-428a-b1a1-8b675ee51848\") " pod="kube-system/coredns-6f6b679f8f-drzkj" Jan 30 13:02:36.969587 kubelet[2447]: I0130 13:02:36.969498 2447 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xzr8s\" (UniqueName: \"kubernetes.io/projected/6f5a0fbe-ae0d-4b06-9fb9-d4626175bd41-kube-api-access-xzr8s\") pod \"calico-kube-controllers-849d69c5fc-hlqw7\" (UID: \"6f5a0fbe-ae0d-4b06-9fb9-d4626175bd41\") " pod="calico-system/calico-kube-controllers-849d69c5fc-hlqw7" Jan 30 13:02:36.969587 kubelet[2447]: I0130 13:02:36.969532 2447 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/70d42510-d7d7-428a-b1a1-8b675ee51848-config-volume\") pod \"coredns-6f6b679f8f-drzkj\" (UID: \"70d42510-d7d7-428a-b1a1-8b675ee51848\") " pod="kube-system/coredns-6f6b679f8f-drzkj" Jan 30 13:02:36.969587 kubelet[2447]: I0130 13:02:36.969556 2447 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fdqvz\" (UniqueName: \"kubernetes.io/projected/3cc3298b-ef9c-4aeb-903e-4a8f5c9daf9d-kube-api-access-fdqvz\") pod \"calico-apiserver-6cb779964f-8zwbp\" (UID: \"3cc3298b-ef9c-4aeb-903e-4a8f5c9daf9d\") " pod="calico-apiserver/calico-apiserver-6cb779964f-8zwbp" Jan 30 13:02:36.969587 kubelet[2447]: I0130 13:02:36.969578 2447 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5bpnx\" (UniqueName: \"kubernetes.io/projected/c604cc46-99d4-4353-8500-4bb310160935-kube-api-access-5bpnx\") pod \"coredns-6f6b679f8f-k4kch\" (UID: \"c604cc46-99d4-4353-8500-4bb310160935\") " pod="kube-system/coredns-6f6b679f8f-k4kch" Jan 30 13:02:36.969838 kubelet[2447]: I0130 13:02:36.969596 2447 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/375a330c-8230-4057-b70e-a0f2609c831f-calico-apiserver-certs\") pod \"calico-apiserver-6cb779964f-9bb6z\" (UID: \"375a330c-8230-4057-b70e-a0f2609c831f\") " pod="calico-apiserver/calico-apiserver-6cb779964f-9bb6z" Jan 30 13:02:36.969838 kubelet[2447]: I0130 13:02:36.969614 2447 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kj2k4\" (UniqueName: \"kubernetes.io/projected/375a330c-8230-4057-b70e-a0f2609c831f-kube-api-access-kj2k4\") pod \"calico-apiserver-6cb779964f-9bb6z\" (UID: \"375a330c-8230-4057-b70e-a0f2609c831f\") " pod="calico-apiserver/calico-apiserver-6cb779964f-9bb6z" Jan 30 13:02:36.969838 kubelet[2447]: I0130 13:02:36.969656 2447 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/3cc3298b-ef9c-4aeb-903e-4a8f5c9daf9d-calico-apiserver-certs\") pod \"calico-apiserver-6cb779964f-8zwbp\" (UID: 
\"3cc3298b-ef9c-4aeb-903e-4a8f5c9daf9d\") " pod="calico-apiserver/calico-apiserver-6cb779964f-8zwbp" Jan 30 13:02:36.969838 kubelet[2447]: I0130 13:02:36.969690 2447 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c604cc46-99d4-4353-8500-4bb310160935-config-volume\") pod \"coredns-6f6b679f8f-k4kch\" (UID: \"c604cc46-99d4-4353-8500-4bb310160935\") " pod="kube-system/coredns-6f6b679f8f-k4kch" Jan 30 13:02:36.969838 kubelet[2447]: I0130 13:02:36.969706 2447 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6f5a0fbe-ae0d-4b06-9fb9-d4626175bd41-tigera-ca-bundle\") pod \"calico-kube-controllers-849d69c5fc-hlqw7\" (UID: \"6f5a0fbe-ae0d-4b06-9fb9-d4626175bd41\") " pod="calico-system/calico-kube-controllers-849d69c5fc-hlqw7" Jan 30 13:02:37.121791 kubelet[2447]: E0130 13:02:37.121583 2447 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:02:37.122815 containerd[1447]: time="2025-01-30T13:02:37.122663194Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-k4kch,Uid:c604cc46-99d4-4353-8500-4bb310160935,Namespace:kube-system,Attempt:0,}" Jan 30 13:02:37.129739 kubelet[2447]: E0130 13:02:37.126219 2447 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:02:37.129927 containerd[1447]: time="2025-01-30T13:02:37.128931858Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Jan 30 13:02:37.134324 kubelet[2447]: E0130 13:02:37.134269 2447 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:02:37.134958 containerd[1447]: time="2025-01-30T13:02:37.134905162Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-drzkj,Uid:70d42510-d7d7-428a-b1a1-8b675ee51848,Namespace:kube-system,Attempt:0,}" Jan 30 13:02:37.140195 containerd[1447]: time="2025-01-30T13:02:37.139970829Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cb779964f-8zwbp,Uid:3cc3298b-ef9c-4aeb-903e-4a8f5c9daf9d,Namespace:calico-apiserver,Attempt:0,}" Jan 30 13:02:37.145813 containerd[1447]: time="2025-01-30T13:02:37.145765854Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cb779964f-9bb6z,Uid:375a330c-8230-4057-b70e-a0f2609c831f,Namespace:calico-apiserver,Attempt:0,}" Jan 30 13:02:37.168950 containerd[1447]: time="2025-01-30T13:02:37.168904435Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-849d69c5fc-hlqw7,Uid:6f5a0fbe-ae0d-4b06-9fb9-d4626175bd41,Namespace:calico-system,Attempt:0,}" Jan 30 13:02:37.786566 containerd[1447]: time="2025-01-30T13:02:37.786498970Z" level=error msg="Failed to destroy network for sandbox \"73e6685a3f4e555815fd515090951c9e9eec92296cc236a94a5f1c9fdca86dcc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:02:37.787665 containerd[1447]: time="2025-01-30T13:02:37.787289888Z" level=error msg="Failed to destroy network for 
sandbox \"4231d25d555a20808ef4a5657d6c8bf54a6941380b8035735f01656c2617a17e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:02:37.788670 containerd[1447]: time="2025-01-30T13:02:37.787593768Z" level=error msg="encountered an error cleaning up failed sandbox \"73e6685a3f4e555815fd515090951c9e9eec92296cc236a94a5f1c9fdca86dcc\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:02:37.788775 containerd[1447]: time="2025-01-30T13:02:37.787999967Z" level=error msg="encountered an error cleaning up failed sandbox \"4231d25d555a20808ef4a5657d6c8bf54a6941380b8035735f01656c2617a17e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:02:37.788823 containerd[1447]: time="2025-01-30T13:02:37.788793204Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-drzkj,Uid:70d42510-d7d7-428a-b1a1-8b675ee51848,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4231d25d555a20808ef4a5657d6c8bf54a6941380b8035735f01656c2617a17e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:02:37.788922 containerd[1447]: time="2025-01-30T13:02:37.788722885Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cb779964f-8zwbp,Uid:3cc3298b-ef9c-4aeb-903e-4a8f5c9daf9d,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"73e6685a3f4e555815fd515090951c9e9eec92296cc236a94a5f1c9fdca86dcc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:02:37.791714 containerd[1447]: time="2025-01-30T13:02:37.791650557Z" level=error msg="Failed to destroy network for sandbox \"0b829235a0188946a6449c9e3ddc7b91946992c18efc247bd07d9cc756ab37ab\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:02:37.792783 containerd[1447]: time="2025-01-30T13:02:37.792591915Z" level=error msg="encountered an error cleaning up failed sandbox \"0b829235a0188946a6449c9e3ddc7b91946992c18efc247bd07d9cc756ab37ab\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:02:37.792984 containerd[1447]: time="2025-01-30T13:02:37.792957634Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cb779964f-9bb6z,Uid:375a330c-8230-4057-b70e-a0f2609c831f,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0b829235a0188946a6449c9e3ddc7b91946992c18efc247bd07d9cc756ab37ab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:02:37.794520 containerd[1447]: time="2025-01-30T13:02:37.794472150Z" level=error msg="Failed to destroy network for sandbox \"46d93fa456940183af25a2d8b69c33bc9abfeba16d9106ec032afae4f6888f7b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:02:37.795074 kubelet[2447]: E0130 13:02:37.794966 2447 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"73e6685a3f4e555815fd515090951c9e9eec92296cc236a94a5f1c9fdca86dcc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:02:37.795074 kubelet[2447]: E0130 13:02:37.795007 2447 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4231d25d555a20808ef4a5657d6c8bf54a6941380b8035735f01656c2617a17e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:02:37.795074 kubelet[2447]: E0130 13:02:37.795100 2447 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4231d25d555a20808ef4a5657d6c8bf54a6941380b8035735f01656c2617a17e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-drzkj" Jan 30 13:02:37.795074 kubelet[2447]: E0130 13:02:37.795110 2447 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"73e6685a3f4e555815fd515090951c9e9eec92296cc236a94a5f1c9fdca86dcc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6cb779964f-8zwbp" Jan 30 13:02:37.795546 kubelet[2447]: E0130 13:02:37.795134 2447 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"73e6685a3f4e555815fd515090951c9e9eec92296cc236a94a5f1c9fdca86dcc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6cb779964f-8zwbp" Jan 30 13:02:37.795546 kubelet[2447]: E0130 13:02:37.794966 2447 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0b829235a0188946a6449c9e3ddc7b91946992c18efc247bd07d9cc756ab37ab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:02:37.795546 kubelet[2447]: E0130 13:02:37.795160 2447 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"0b829235a0188946a6449c9e3ddc7b91946992c18efc247bd07d9cc756ab37ab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6cb779964f-9bb6z" Jan 30 13:02:37.795546 kubelet[2447]: E0130 13:02:37.795173 2447 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0b829235a0188946a6449c9e3ddc7b91946992c18efc247bd07d9cc756ab37ab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6cb779964f-9bb6z" Jan 30 13:02:37.795830 kubelet[2447]: E0130 13:02:37.795198 2447 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6cb779964f-8zwbp_calico-apiserver(3cc3298b-ef9c-4aeb-903e-4a8f5c9daf9d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6cb779964f-8zwbp_calico-apiserver(3cc3298b-ef9c-4aeb-903e-4a8f5c9daf9d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"73e6685a3f4e555815fd515090951c9e9eec92296cc236a94a5f1c9fdca86dcc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6cb779964f-8zwbp" podUID="3cc3298b-ef9c-4aeb-903e-4a8f5c9daf9d" Jan 30 13:02:37.795830 kubelet[2447]: E0130 13:02:37.795210 2447 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6cb779964f-9bb6z_calico-apiserver(375a330c-8230-4057-b70e-a0f2609c831f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6cb779964f-9bb6z_calico-apiserver(375a330c-8230-4057-b70e-a0f2609c831f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0b829235a0188946a6449c9e3ddc7b91946992c18efc247bd07d9cc756ab37ab\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6cb779964f-9bb6z" podUID="375a330c-8230-4057-b70e-a0f2609c831f" Jan 30 13:02:37.795946 kubelet[2447]: E0130 13:02:37.795131 2447 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4231d25d555a20808ef4a5657d6c8bf54a6941380b8035735f01656c2617a17e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-drzkj" Jan 30 13:02:37.795946 kubelet[2447]: E0130 13:02:37.795284 2447 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-drzkj_kube-system(70d42510-d7d7-428a-b1a1-8b675ee51848)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-drzkj_kube-system(70d42510-d7d7-428a-b1a1-8b675ee51848)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4231d25d555a20808ef4a5657d6c8bf54a6941380b8035735f01656c2617a17e\\\": plugin type=\\\"calico\\\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-drzkj" podUID="70d42510-d7d7-428a-b1a1-8b675ee51848" Jan 30 13:02:37.796324 containerd[1447]: time="2025-01-30T13:02:37.796131306Z" level=error msg="encountered an error cleaning up failed sandbox \"46d93fa456940183af25a2d8b69c33bc9abfeba16d9106ec032afae4f6888f7b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:02:37.796324 containerd[1447]: time="2025-01-30T13:02:37.796205505Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-k4kch,Uid:c604cc46-99d4-4353-8500-4bb310160935,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"46d93fa456940183af25a2d8b69c33bc9abfeba16d9106ec032afae4f6888f7b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:02:37.796911 kubelet[2447]: E0130 13:02:37.796724 2447 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"46d93fa456940183af25a2d8b69c33bc9abfeba16d9106ec032afae4f6888f7b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:02:37.796911 kubelet[2447]: E0130 13:02:37.796802 2447 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"46d93fa456940183af25a2d8b69c33bc9abfeba16d9106ec032afae4f6888f7b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-k4kch" Jan 30 13:02:37.796911 kubelet[2447]: E0130 13:02:37.796823 2447 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"46d93fa456940183af25a2d8b69c33bc9abfeba16d9106ec032afae4f6888f7b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-k4kch" Jan 30 13:02:37.797140 kubelet[2447]: E0130 13:02:37.796866 2447 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-k4kch_kube-system(c604cc46-99d4-4353-8500-4bb310160935)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-k4kch_kube-system(c604cc46-99d4-4353-8500-4bb310160935)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"46d93fa456940183af25a2d8b69c33bc9abfeba16d9106ec032afae4f6888f7b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-k4kch" podUID="c604cc46-99d4-4353-8500-4bb310160935" Jan 30 13:02:37.804376 containerd[1447]: time="2025-01-30T13:02:37.804311965Z" level=error msg="Failed to destroy 
network for sandbox \"a2efa84a217152daf89eb875770aef6455c7dc7562ea3d8bd6b29e561d652a82\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:02:37.804739 containerd[1447]: time="2025-01-30T13:02:37.804700284Z" level=error msg="encountered an error cleaning up failed sandbox \"a2efa84a217152daf89eb875770aef6455c7dc7562ea3d8bd6b29e561d652a82\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:02:37.804794 containerd[1447]: time="2025-01-30T13:02:37.804760004Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-849d69c5fc-hlqw7,Uid:6f5a0fbe-ae0d-4b06-9fb9-d4626175bd41,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a2efa84a217152daf89eb875770aef6455c7dc7562ea3d8bd6b29e561d652a82\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:02:37.805189 kubelet[2447]: E0130 13:02:37.805069 2447 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a2efa84a217152daf89eb875770aef6455c7dc7562ea3d8bd6b29e561d652a82\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:02:37.805189 kubelet[2447]: E0130 13:02:37.805141 2447 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a2efa84a217152daf89eb875770aef6455c7dc7562ea3d8bd6b29e561d652a82\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-849d69c5fc-hlqw7" Jan 30 13:02:37.805189 kubelet[2447]: E0130 13:02:37.805162 2447 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a2efa84a217152daf89eb875770aef6455c7dc7562ea3d8bd6b29e561d652a82\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-849d69c5fc-hlqw7" Jan 30 13:02:37.805358 kubelet[2447]: E0130 13:02:37.805207 2447 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-849d69c5fc-hlqw7_calico-system(6f5a0fbe-ae0d-4b06-9fb9-d4626175bd41)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-849d69c5fc-hlqw7_calico-system(6f5a0fbe-ae0d-4b06-9fb9-d4626175bd41)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a2efa84a217152daf89eb875770aef6455c7dc7562ea3d8bd6b29e561d652a82\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-849d69c5fc-hlqw7" 
podUID="6f5a0fbe-ae0d-4b06-9fb9-d4626175bd41" Jan 30 13:02:38.042348 systemd[1]: Created slice kubepods-besteffort-pode9f4c40c_83e4_4e09_bcf8_7d4d055d34c9.slice - libcontainer container kubepods-besteffort-pode9f4c40c_83e4_4e09_bcf8_7d4d055d34c9.slice. Jan 30 13:02:38.049596 containerd[1447]: time="2025-01-30T13:02:38.049528263Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-slr48,Uid:e9f4c40c-83e4-4e09-bcf8-7d4d055d34c9,Namespace:calico-system,Attempt:0,}" Jan 30 13:02:38.128806 kubelet[2447]: I0130 13:02:38.128708 2447 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="73e6685a3f4e555815fd515090951c9e9eec92296cc236a94a5f1c9fdca86dcc" Jan 30 13:02:38.131576 kubelet[2447]: I0130 13:02:38.130771 2447 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="46d93fa456940183af25a2d8b69c33bc9abfeba16d9106ec032afae4f6888f7b" Jan 30 13:02:38.131611 containerd[1447]: time="2025-01-30T13:02:38.129273911Z" level=info msg="StopPodSandbox for \"73e6685a3f4e555815fd515090951c9e9eec92296cc236a94a5f1c9fdca86dcc\"" Jan 30 13:02:38.131611 containerd[1447]: time="2025-01-30T13:02:38.129436471Z" level=info msg="Ensure that sandbox 73e6685a3f4e555815fd515090951c9e9eec92296cc236a94a5f1c9fdca86dcc in task-service has been cleanup successfully" Jan 30 13:02:38.132239 containerd[1447]: time="2025-01-30T13:02:38.132201904Z" level=info msg="StopPodSandbox for \"46d93fa456940183af25a2d8b69c33bc9abfeba16d9106ec032afae4f6888f7b\"" Jan 30 13:02:38.132501 containerd[1447]: time="2025-01-30T13:02:38.132478584Z" level=info msg="Ensure that sandbox 46d93fa456940183af25a2d8b69c33bc9abfeba16d9106ec032afae4f6888f7b in task-service has been cleanup successfully" Jan 30 13:02:38.139600 kubelet[2447]: I0130 13:02:38.139559 2447 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0b829235a0188946a6449c9e3ddc7b91946992c18efc247bd07d9cc756ab37ab" Jan 30 13:02:38.141716 kubelet[2447]: I0130 13:02:38.141670 2447 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4231d25d555a20808ef4a5657d6c8bf54a6941380b8035735f01656c2617a17e" Jan 30 13:02:38.142359 containerd[1447]: time="2025-01-30T13:02:38.142325760Z" level=info msg="StopPodSandbox for \"4231d25d555a20808ef4a5657d6c8bf54a6941380b8035735f01656c2617a17e\"" Jan 30 13:02:38.142714 containerd[1447]: time="2025-01-30T13:02:38.142682319Z" level=info msg="Ensure that sandbox 4231d25d555a20808ef4a5657d6c8bf54a6941380b8035735f01656c2617a17e in task-service has been cleanup successfully" Jan 30 13:02:38.143875 kubelet[2447]: I0130 13:02:38.143834 2447 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a2efa84a217152daf89eb875770aef6455c7dc7562ea3d8bd6b29e561d652a82" Jan 30 13:02:38.144470 containerd[1447]: time="2025-01-30T13:02:38.144438955Z" level=info msg="StopPodSandbox for \"a2efa84a217152daf89eb875770aef6455c7dc7562ea3d8bd6b29e561d652a82\"" Jan 30 13:02:38.145662 containerd[1447]: time="2025-01-30T13:02:38.145627552Z" level=info msg="Ensure that sandbox a2efa84a217152daf89eb875770aef6455c7dc7562ea3d8bd6b29e561d652a82 in task-service has been cleanup successfully" Jan 30 13:02:38.152543 containerd[1447]: time="2025-01-30T13:02:38.152489056Z" level=info msg="StopPodSandbox for \"0b829235a0188946a6449c9e3ddc7b91946992c18efc247bd07d9cc756ab37ab\"" Jan 30 13:02:38.153163 containerd[1447]: time="2025-01-30T13:02:38.152931134Z" level=info msg="Ensure that sandbox 
0b829235a0188946a6449c9e3ddc7b91946992c18efc247bd07d9cc756ab37ab in task-service has been cleanup successfully" Jan 30 13:02:38.169416 containerd[1447]: time="2025-01-30T13:02:38.169342335Z" level=error msg="Failed to destroy network for sandbox \"e4dd4ad6126e00360bfc4071a74b5835efabbe556438724ffb9eb180e0498716\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:02:38.170403 containerd[1447]: time="2025-01-30T13:02:38.170333493Z" level=error msg="encountered an error cleaning up failed sandbox \"e4dd4ad6126e00360bfc4071a74b5835efabbe556438724ffb9eb180e0498716\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:02:38.170489 containerd[1447]: time="2025-01-30T13:02:38.170428092Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-slr48,Uid:e9f4c40c-83e4-4e09-bcf8-7d4d055d34c9,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e4dd4ad6126e00360bfc4071a74b5835efabbe556438724ffb9eb180e0498716\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:02:38.170806 kubelet[2447]: E0130 13:02:38.170695 2447 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e4dd4ad6126e00360bfc4071a74b5835efabbe556438724ffb9eb180e0498716\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:02:38.170806 kubelet[2447]: E0130 13:02:38.170755 2447 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e4dd4ad6126e00360bfc4071a74b5835efabbe556438724ffb9eb180e0498716\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-slr48" Jan 30 13:02:38.170806 kubelet[2447]: E0130 13:02:38.170788 2447 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e4dd4ad6126e00360bfc4071a74b5835efabbe556438724ffb9eb180e0498716\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-slr48" Jan 30 13:02:38.171345 kubelet[2447]: E0130 13:02:38.170838 2447 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-slr48_calico-system(e9f4c40c-83e4-4e09-bcf8-7d4d055d34c9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-slr48_calico-system(e9f4c40c-83e4-4e09-bcf8-7d4d055d34c9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e4dd4ad6126e00360bfc4071a74b5835efabbe556438724ffb9eb180e0498716\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-slr48" podUID="e9f4c40c-83e4-4e09-bcf8-7d4d055d34c9" Jan 30 13:02:38.172369 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e4dd4ad6126e00360bfc4071a74b5835efabbe556438724ffb9eb180e0498716-shm.mount: Deactivated successfully. Jan 30 13:02:38.211970 containerd[1447]: time="2025-01-30T13:02:38.211900633Z" level=error msg="StopPodSandbox for \"4231d25d555a20808ef4a5657d6c8bf54a6941380b8035735f01656c2617a17e\" failed" error="failed to destroy network for sandbox \"4231d25d555a20808ef4a5657d6c8bf54a6941380b8035735f01656c2617a17e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:02:38.212340 kubelet[2447]: E0130 13:02:38.212138 2447 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4231d25d555a20808ef4a5657d6c8bf54a6941380b8035735f01656c2617a17e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4231d25d555a20808ef4a5657d6c8bf54a6941380b8035735f01656c2617a17e" Jan 30 13:02:38.212340 kubelet[2447]: E0130 13:02:38.212195 2447 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4231d25d555a20808ef4a5657d6c8bf54a6941380b8035735f01656c2617a17e"} Jan 30 13:02:38.212340 kubelet[2447]: E0130 13:02:38.212256 2447 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"70d42510-d7d7-428a-b1a1-8b675ee51848\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4231d25d555a20808ef4a5657d6c8bf54a6941380b8035735f01656c2617a17e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 13:02:38.212340 kubelet[2447]: E0130 13:02:38.212277 2447 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"70d42510-d7d7-428a-b1a1-8b675ee51848\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4231d25d555a20808ef4a5657d6c8bf54a6941380b8035735f01656c2617a17e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-drzkj" podUID="70d42510-d7d7-428a-b1a1-8b675ee51848" Jan 30 13:02:38.215412 containerd[1447]: time="2025-01-30T13:02:38.215355504Z" level=error msg="StopPodSandbox for \"a2efa84a217152daf89eb875770aef6455c7dc7562ea3d8bd6b29e561d652a82\" failed" error="failed to destroy network for sandbox \"a2efa84a217152daf89eb875770aef6455c7dc7562ea3d8bd6b29e561d652a82\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:02:38.215896 kubelet[2447]: E0130 13:02:38.215746 2447 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a2efa84a217152daf89eb875770aef6455c7dc7562ea3d8bd6b29e561d652a82\": plugin 
type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a2efa84a217152daf89eb875770aef6455c7dc7562ea3d8bd6b29e561d652a82" Jan 30 13:02:38.216061 kubelet[2447]: E0130 13:02:38.215912 2447 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a2efa84a217152daf89eb875770aef6455c7dc7562ea3d8bd6b29e561d652a82"} Jan 30 13:02:38.216061 kubelet[2447]: E0130 13:02:38.215957 2447 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"6f5a0fbe-ae0d-4b06-9fb9-d4626175bd41\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a2efa84a217152daf89eb875770aef6455c7dc7562ea3d8bd6b29e561d652a82\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 13:02:38.216061 kubelet[2447]: E0130 13:02:38.215981 2447 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"6f5a0fbe-ae0d-4b06-9fb9-d4626175bd41\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a2efa84a217152daf89eb875770aef6455c7dc7562ea3d8bd6b29e561d652a82\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-849d69c5fc-hlqw7" podUID="6f5a0fbe-ae0d-4b06-9fb9-d4626175bd41" Jan 30 13:02:38.218076 containerd[1447]: time="2025-01-30T13:02:38.218023058Z" level=error msg="StopPodSandbox for \"46d93fa456940183af25a2d8b69c33bc9abfeba16d9106ec032afae4f6888f7b\" failed" error="failed to destroy network for sandbox \"46d93fa456940183af25a2d8b69c33bc9abfeba16d9106ec032afae4f6888f7b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:02:38.218284 kubelet[2447]: E0130 13:02:38.218242 2447 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"46d93fa456940183af25a2d8b69c33bc9abfeba16d9106ec032afae4f6888f7b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="46d93fa456940183af25a2d8b69c33bc9abfeba16d9106ec032afae4f6888f7b" Jan 30 13:02:38.218284 kubelet[2447]: E0130 13:02:38.218292 2447 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"46d93fa456940183af25a2d8b69c33bc9abfeba16d9106ec032afae4f6888f7b"} Jan 30 13:02:38.218284 kubelet[2447]: E0130 13:02:38.218329 2447 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c604cc46-99d4-4353-8500-4bb310160935\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"46d93fa456940183af25a2d8b69c33bc9abfeba16d9106ec032afae4f6888f7b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 13:02:38.218284 kubelet[2447]: E0130 
13:02:38.218349 2447 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c604cc46-99d4-4353-8500-4bb310160935\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"46d93fa456940183af25a2d8b69c33bc9abfeba16d9106ec032afae4f6888f7b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-k4kch" podUID="c604cc46-99d4-4353-8500-4bb310160935" Jan 30 13:02:38.226838 containerd[1447]: time="2025-01-30T13:02:38.226781757Z" level=error msg="StopPodSandbox for \"73e6685a3f4e555815fd515090951c9e9eec92296cc236a94a5f1c9fdca86dcc\" failed" error="failed to destroy network for sandbox \"73e6685a3f4e555815fd515090951c9e9eec92296cc236a94a5f1c9fdca86dcc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:02:38.227844 kubelet[2447]: E0130 13:02:38.227783 2447 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"73e6685a3f4e555815fd515090951c9e9eec92296cc236a94a5f1c9fdca86dcc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="73e6685a3f4e555815fd515090951c9e9eec92296cc236a94a5f1c9fdca86dcc" Jan 30 13:02:38.227926 kubelet[2447]: E0130 13:02:38.227852 2447 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"73e6685a3f4e555815fd515090951c9e9eec92296cc236a94a5f1c9fdca86dcc"} Jan 30 13:02:38.227926 kubelet[2447]: E0130 13:02:38.227889 2447 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3cc3298b-ef9c-4aeb-903e-4a8f5c9daf9d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"73e6685a3f4e555815fd515090951c9e9eec92296cc236a94a5f1c9fdca86dcc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 13:02:38.227926 kubelet[2447]: E0130 13:02:38.227914 2447 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3cc3298b-ef9c-4aeb-903e-4a8f5c9daf9d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"73e6685a3f4e555815fd515090951c9e9eec92296cc236a94a5f1c9fdca86dcc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6cb779964f-8zwbp" podUID="3cc3298b-ef9c-4aeb-903e-4a8f5c9daf9d" Jan 30 13:02:38.236876 containerd[1447]: time="2025-01-30T13:02:38.236758573Z" level=error msg="StopPodSandbox for \"0b829235a0188946a6449c9e3ddc7b91946992c18efc247bd07d9cc756ab37ab\" failed" error="failed to destroy network for sandbox \"0b829235a0188946a6449c9e3ddc7b91946992c18efc247bd07d9cc756ab37ab\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 
13:02:38.237116 kubelet[2447]: E0130 13:02:38.237009 2447 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0b829235a0188946a6449c9e3ddc7b91946992c18efc247bd07d9cc756ab37ab\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0b829235a0188946a6449c9e3ddc7b91946992c18efc247bd07d9cc756ab37ab" Jan 30 13:02:38.237116 kubelet[2447]: E0130 13:02:38.237087 2447 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"0b829235a0188946a6449c9e3ddc7b91946992c18efc247bd07d9cc756ab37ab"} Jan 30 13:02:38.237269 kubelet[2447]: E0130 13:02:38.237144 2447 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"375a330c-8230-4057-b70e-a0f2609c831f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0b829235a0188946a6449c9e3ddc7b91946992c18efc247bd07d9cc756ab37ab\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 13:02:38.237269 kubelet[2447]: E0130 13:02:38.237168 2447 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"375a330c-8230-4057-b70e-a0f2609c831f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0b829235a0188946a6449c9e3ddc7b91946992c18efc247bd07d9cc756ab37ab\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6cb779964f-9bb6z" podUID="375a330c-8230-4057-b70e-a0f2609c831f" Jan 30 13:02:39.148149 kubelet[2447]: I0130 13:02:39.148035 2447 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e4dd4ad6126e00360bfc4071a74b5835efabbe556438724ffb9eb180e0498716" Jan 30 13:02:39.149431 containerd[1447]: time="2025-01-30T13:02:39.148697841Z" level=info msg="StopPodSandbox for \"e4dd4ad6126e00360bfc4071a74b5835efabbe556438724ffb9eb180e0498716\"" Jan 30 13:02:39.149431 containerd[1447]: time="2025-01-30T13:02:39.148868521Z" level=info msg="Ensure that sandbox e4dd4ad6126e00360bfc4071a74b5835efabbe556438724ffb9eb180e0498716 in task-service has been cleanup successfully" Jan 30 13:02:39.179551 containerd[1447]: time="2025-01-30T13:02:39.179407612Z" level=error msg="StopPodSandbox for \"e4dd4ad6126e00360bfc4071a74b5835efabbe556438724ffb9eb180e0498716\" failed" error="failed to destroy network for sandbox \"e4dd4ad6126e00360bfc4071a74b5835efabbe556438724ffb9eb180e0498716\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:02:39.179800 kubelet[2447]: E0130 13:02:39.179727 2447 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e4dd4ad6126e00360bfc4071a74b5835efabbe556438724ffb9eb180e0498716\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
podSandboxID="e4dd4ad6126e00360bfc4071a74b5835efabbe556438724ffb9eb180e0498716" Jan 30 13:02:39.179800 kubelet[2447]: E0130 13:02:39.179786 2447 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e4dd4ad6126e00360bfc4071a74b5835efabbe556438724ffb9eb180e0498716"} Jan 30 13:02:39.179889 kubelet[2447]: E0130 13:02:39.179827 2447 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e9f4c40c-83e4-4e09-bcf8-7d4d055d34c9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e4dd4ad6126e00360bfc4071a74b5835efabbe556438724ffb9eb180e0498716\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 13:02:39.179889 kubelet[2447]: E0130 13:02:39.179850 2447 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e9f4c40c-83e4-4e09-bcf8-7d4d055d34c9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e4dd4ad6126e00360bfc4071a74b5835efabbe556438724ffb9eb180e0498716\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-slr48" podUID="e9f4c40c-83e4-4e09-bcf8-7d4d055d34c9" Jan 30 13:02:40.404547 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1515088634.mount: Deactivated successfully. Jan 30 13:02:40.683739 containerd[1447]: time="2025-01-30T13:02:40.683298917Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:02:40.684379 containerd[1447]: time="2025-01-30T13:02:40.684131315Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=137671762" Jan 30 13:02:40.688399 containerd[1447]: time="2025-01-30T13:02:40.688099627Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"137671624\" in 3.55911769s" Jan 30 13:02:40.688399 containerd[1447]: time="2025-01-30T13:02:40.688150106Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\"" Jan 30 13:02:40.692886 containerd[1447]: time="2025-01-30T13:02:40.692826177Z" level=info msg="ImageCreate event name:\"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:02:40.693690 containerd[1447]: time="2025-01-30T13:02:40.693649615Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:02:40.711660 containerd[1447]: time="2025-01-30T13:02:40.711253018Z" level=info msg="CreateContainer within sandbox \"fd7621f4b5a4e5bd02148ba75a4463aff693af6d89e18d03ff3331a6f0bfbd75\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 30 13:02:40.760573 
containerd[1447]: time="2025-01-30T13:02:40.760497514Z" level=info msg="CreateContainer within sandbox \"fd7621f4b5a4e5bd02148ba75a4463aff693af6d89e18d03ff3331a6f0bfbd75\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"136962be708d1d76af575905d46c6bf1feab9c95fbe49b777b8a4d32c66ed8de\"" Jan 30 13:02:40.761226 containerd[1447]: time="2025-01-30T13:02:40.761083592Z" level=info msg="StartContainer for \"136962be708d1d76af575905d46c6bf1feab9c95fbe49b777b8a4d32c66ed8de\"" Jan 30 13:02:40.824855 systemd[1]: Started cri-containerd-136962be708d1d76af575905d46c6bf1feab9c95fbe49b777b8a4d32c66ed8de.scope - libcontainer container 136962be708d1d76af575905d46c6bf1feab9c95fbe49b777b8a4d32c66ed8de. Jan 30 13:02:40.861445 containerd[1447]: time="2025-01-30T13:02:40.861386340Z" level=info msg="StartContainer for \"136962be708d1d76af575905d46c6bf1feab9c95fbe49b777b8a4d32c66ed8de\" returns successfully" Jan 30 13:02:41.157285 kubelet[2447]: E0130 13:02:41.156149 2447 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:02:41.177555 kubelet[2447]: I0130 13:02:41.177452 2447 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-rl5fw" podStartSLOduration=1.407128788 podStartE2EDuration="13.177434735s" podCreationTimestamp="2025-01-30 13:02:28 +0000 UTC" firstStartedPulling="2025-01-30 13:02:28.932101649 +0000 UTC m=+14.006665661" lastFinishedPulling="2025-01-30 13:02:40.702407596 +0000 UTC m=+25.776971608" observedRunningTime="2025-01-30 13:02:41.173477663 +0000 UTC m=+26.248041675" watchObservedRunningTime="2025-01-30 13:02:41.177434735 +0000 UTC m=+26.251998747" Jan 30 13:02:41.190669 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 30 13:02:41.190845 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. 
Jan 30 13:02:42.158884 kubelet[2447]: E0130 13:02:42.158831 2447 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:02:43.160262 kubelet[2447]: E0130 13:02:43.160183 2447 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:02:43.188122 kubelet[2447]: I0130 13:02:43.188071 2447 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 13:02:43.189106 kubelet[2447]: E0130 13:02:43.189059 2447 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:02:44.000664 kernel: bpftool[3889]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 30 13:02:44.166252 kubelet[2447]: E0130 13:02:44.166215 2447 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:02:44.204523 systemd-networkd[1371]: vxlan.calico: Link UP Jan 30 13:02:44.204531 systemd-networkd[1371]: vxlan.calico: Gained carrier Jan 30 13:02:45.433261 systemd-networkd[1371]: vxlan.calico: Gained IPv6LL Jan 30 13:02:49.036230 containerd[1447]: time="2025-01-30T13:02:49.036136093Z" level=info msg="StopPodSandbox for \"a2efa84a217152daf89eb875770aef6455c7dc7562ea3d8bd6b29e561d652a82\"" Jan 30 13:02:49.311247 systemd[1]: Started sshd@7-10.0.0.89:22-10.0.0.1:39094.service - OpenSSH per-connection server daemon (10.0.0.1:39094). Jan 30 13:02:49.366565 sshd[3998]: Accepted publickey for core from 10.0.0.1 port 39094 ssh2: RSA SHA256:L/o4MUo/PVZc4DGxUVgHzL5cb5Nt9LAG1MaqWWrUsp0 Jan 30 13:02:49.365668 sshd[3998]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:02:49.378232 systemd-logind[1423]: New session 8 of user core. Jan 30 13:02:49.385953 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 30 13:02:49.392433 containerd[1447]: 2025-01-30 13:02:49.160 [INFO][3981] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a2efa84a217152daf89eb875770aef6455c7dc7562ea3d8bd6b29e561d652a82" Jan 30 13:02:49.392433 containerd[1447]: 2025-01-30 13:02:49.161 [INFO][3981] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a2efa84a217152daf89eb875770aef6455c7dc7562ea3d8bd6b29e561d652a82" iface="eth0" netns="/var/run/netns/cni-77cb01af-3fd3-49b2-b85d-5fced44d2d7e" Jan 30 13:02:49.392433 containerd[1447]: 2025-01-30 13:02:49.165 [INFO][3981] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a2efa84a217152daf89eb875770aef6455c7dc7562ea3d8bd6b29e561d652a82" iface="eth0" netns="/var/run/netns/cni-77cb01af-3fd3-49b2-b85d-5fced44d2d7e" Jan 30 13:02:49.392433 containerd[1447]: 2025-01-30 13:02:49.169 [INFO][3981] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="a2efa84a217152daf89eb875770aef6455c7dc7562ea3d8bd6b29e561d652a82" iface="eth0" netns="/var/run/netns/cni-77cb01af-3fd3-49b2-b85d-5fced44d2d7e" Jan 30 13:02:49.392433 containerd[1447]: 2025-01-30 13:02:49.169 [INFO][3981] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a2efa84a217152daf89eb875770aef6455c7dc7562ea3d8bd6b29e561d652a82" Jan 30 13:02:49.392433 containerd[1447]: 2025-01-30 13:02:49.169 [INFO][3981] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a2efa84a217152daf89eb875770aef6455c7dc7562ea3d8bd6b29e561d652a82" Jan 30 13:02:49.392433 containerd[1447]: 2025-01-30 13:02:49.357 [INFO][3991] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a2efa84a217152daf89eb875770aef6455c7dc7562ea3d8bd6b29e561d652a82" HandleID="k8s-pod-network.a2efa84a217152daf89eb875770aef6455c7dc7562ea3d8bd6b29e561d652a82" Workload="localhost-k8s-calico--kube--controllers--849d69c5fc--hlqw7-eth0" Jan 30 13:02:49.392433 containerd[1447]: 2025-01-30 13:02:49.357 [INFO][3991] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:02:49.392433 containerd[1447]: 2025-01-30 13:02:49.357 [INFO][3991] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:02:49.392433 containerd[1447]: 2025-01-30 13:02:49.384 [WARNING][3991] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a2efa84a217152daf89eb875770aef6455c7dc7562ea3d8bd6b29e561d652a82" HandleID="k8s-pod-network.a2efa84a217152daf89eb875770aef6455c7dc7562ea3d8bd6b29e561d652a82" Workload="localhost-k8s-calico--kube--controllers--849d69c5fc--hlqw7-eth0" Jan 30 13:02:49.392433 containerd[1447]: 2025-01-30 13:02:49.384 [INFO][3991] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a2efa84a217152daf89eb875770aef6455c7dc7562ea3d8bd6b29e561d652a82" HandleID="k8s-pod-network.a2efa84a217152daf89eb875770aef6455c7dc7562ea3d8bd6b29e561d652a82" Workload="localhost-k8s-calico--kube--controllers--849d69c5fc--hlqw7-eth0" Jan 30 13:02:49.392433 containerd[1447]: 2025-01-30 13:02:49.387 [INFO][3991] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:02:49.392433 containerd[1447]: 2025-01-30 13:02:49.390 [INFO][3981] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="a2efa84a217152daf89eb875770aef6455c7dc7562ea3d8bd6b29e561d652a82" Jan 30 13:02:49.393755 containerd[1447]: time="2025-01-30T13:02:49.393712190Z" level=info msg="TearDown network for sandbox \"a2efa84a217152daf89eb875770aef6455c7dc7562ea3d8bd6b29e561d652a82\" successfully" Jan 30 13:02:49.393755 containerd[1447]: time="2025-01-30T13:02:49.393753270Z" level=info msg="StopPodSandbox for \"a2efa84a217152daf89eb875770aef6455c7dc7562ea3d8bd6b29e561d652a82\" returns successfully" Jan 30 13:02:49.394953 systemd[1]: run-netns-cni\x2d77cb01af\x2d3fd3\x2d49b2\x2db85d\x2d5fced44d2d7e.mount: Deactivated successfully. Jan 30 13:02:49.395590 containerd[1447]: time="2025-01-30T13:02:49.395480788Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-849d69c5fc-hlqw7,Uid:6f5a0fbe-ae0d-4b06-9fb9-d4626175bd41,Namespace:calico-system,Attempt:1,}" Jan 30 13:02:49.578672 systemd-networkd[1371]: calia5b922e1ec4: Link UP Jan 30 13:02:49.580017 systemd-networkd[1371]: calia5b922e1ec4: Gained carrier Jan 30 13:02:49.583743 sshd[3998]: pam_unix(sshd:session): session closed for user core Jan 30 13:02:49.593261 systemd[1]: sshd@7-10.0.0.89:22-10.0.0.1:39094.service: Deactivated successfully. 
Jan 30 13:02:49.595530 systemd[1]: session-8.scope: Deactivated successfully. Jan 30 13:02:49.596836 systemd-logind[1423]: Session 8 logged out. Waiting for processes to exit. Jan 30 13:02:49.599605 systemd-logind[1423]: Removed session 8. Jan 30 13:02:49.600391 containerd[1447]: 2025-01-30 13:02:49.464 [INFO][4004] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--849d69c5fc--hlqw7-eth0 calico-kube-controllers-849d69c5fc- calico-system 6f5a0fbe-ae0d-4b06-9fb9-d4626175bd41 774 0 2025-01-30 13:02:28 +0000 UTC <nil> <nil> map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:849d69c5fc projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-849d69c5fc-hlqw7 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calia5b922e1ec4 [] []}} ContainerID="a6ea405ad39a9570f2e560727c79c598b9f07fdb3b73419a20036dbf629f18f7" Namespace="calico-system" Pod="calico-kube-controllers-849d69c5fc-hlqw7" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--849d69c5fc--hlqw7-" Jan 30 13:02:49.600391 containerd[1447]: 2025-01-30 13:02:49.464 [INFO][4004] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="a6ea405ad39a9570f2e560727c79c598b9f07fdb3b73419a20036dbf629f18f7" Namespace="calico-system" Pod="calico-kube-controllers-849d69c5fc-hlqw7" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--849d69c5fc--hlqw7-eth0" Jan 30 13:02:49.600391 containerd[1447]: 2025-01-30 13:02:49.508 [INFO][4028] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a6ea405ad39a9570f2e560727c79c598b9f07fdb3b73419a20036dbf629f18f7" HandleID="k8s-pod-network.a6ea405ad39a9570f2e560727c79c598b9f07fdb3b73419a20036dbf629f18f7" Workload="localhost-k8s-calico--kube--controllers--849d69c5fc--hlqw7-eth0" Jan 30 13:02:49.600391 containerd[1447]: 2025-01-30 13:02:49.522 [INFO][4028] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a6ea405ad39a9570f2e560727c79c598b9f07fdb3b73419a20036dbf629f18f7" HandleID="k8s-pod-network.a6ea405ad39a9570f2e560727c79c598b9f07fdb3b73419a20036dbf629f18f7" Workload="localhost-k8s-calico--kube--controllers--849d69c5fc--hlqw7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002f48e0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-849d69c5fc-hlqw7", "timestamp":"2025-01-30 13:02:49.508149775 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 13:02:49.600391 containerd[1447]: 2025-01-30 13:02:49.522 [INFO][4028] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:02:49.600391 containerd[1447]: 2025-01-30 13:02:49.522 [INFO][4028] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 13:02:49.600391 containerd[1447]: 2025-01-30 13:02:49.522 [INFO][4028] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 30 13:02:49.600391 containerd[1447]: 2025-01-30 13:02:49.525 [INFO][4028] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.a6ea405ad39a9570f2e560727c79c598b9f07fdb3b73419a20036dbf629f18f7" host="localhost" Jan 30 13:02:49.600391 containerd[1447]: 2025-01-30 13:02:49.535 [INFO][4028] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 30 13:02:49.600391 containerd[1447]: 2025-01-30 13:02:49.541 [INFO][4028] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 30 13:02:49.600391 containerd[1447]: 2025-01-30 13:02:49.543 [INFO][4028] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 30 13:02:49.600391 containerd[1447]: 2025-01-30 13:02:49.546 [INFO][4028] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 30 13:02:49.600391 containerd[1447]: 2025-01-30 13:02:49.546 [INFO][4028] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a6ea405ad39a9570f2e560727c79c598b9f07fdb3b73419a20036dbf629f18f7" host="localhost" Jan 30 13:02:49.600391 containerd[1447]: 2025-01-30 13:02:49.548 [INFO][4028] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.a6ea405ad39a9570f2e560727c79c598b9f07fdb3b73419a20036dbf629f18f7 Jan 30 13:02:49.600391 containerd[1447]: 2025-01-30 13:02:49.560 [INFO][4028] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a6ea405ad39a9570f2e560727c79c598b9f07fdb3b73419a20036dbf629f18f7" host="localhost" Jan 30 13:02:49.600391 containerd[1447]: 2025-01-30 13:02:49.570 [INFO][4028] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.a6ea405ad39a9570f2e560727c79c598b9f07fdb3b73419a20036dbf629f18f7" host="localhost" Jan 30 13:02:49.600391 containerd[1447]: 2025-01-30 13:02:49.570 [INFO][4028] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.a6ea405ad39a9570f2e560727c79c598b9f07fdb3b73419a20036dbf629f18f7" host="localhost" Jan 30 13:02:49.600391 containerd[1447]: 2025-01-30 13:02:49.570 [INFO][4028] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 30 13:02:49.600391 containerd[1447]: 2025-01-30 13:02:49.570 [INFO][4028] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="a6ea405ad39a9570f2e560727c79c598b9f07fdb3b73419a20036dbf629f18f7" HandleID="k8s-pod-network.a6ea405ad39a9570f2e560727c79c598b9f07fdb3b73419a20036dbf629f18f7" Workload="localhost-k8s-calico--kube--controllers--849d69c5fc--hlqw7-eth0" Jan 30 13:02:49.602311 containerd[1447]: 2025-01-30 13:02:49.574 [INFO][4004] cni-plugin/k8s.go 386: Populated endpoint ContainerID="a6ea405ad39a9570f2e560727c79c598b9f07fdb3b73419a20036dbf629f18f7" Namespace="calico-system" Pod="calico-kube-controllers-849d69c5fc-hlqw7" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--849d69c5fc--hlqw7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--849d69c5fc--hlqw7-eth0", GenerateName:"calico-kube-controllers-849d69c5fc-", Namespace:"calico-system", SelfLink:"", UID:"6f5a0fbe-ae0d-4b06-9fb9-d4626175bd41", ResourceVersion:"774", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 2, 28, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"849d69c5fc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-849d69c5fc-hlqw7", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia5b922e1ec4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:02:49.602311 containerd[1447]: 2025-01-30 13:02:49.574 [INFO][4004] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="a6ea405ad39a9570f2e560727c79c598b9f07fdb3b73419a20036dbf629f18f7" Namespace="calico-system" Pod="calico-kube-controllers-849d69c5fc-hlqw7" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--849d69c5fc--hlqw7-eth0" Jan 30 13:02:49.602311 containerd[1447]: 2025-01-30 13:02:49.574 [INFO][4004] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia5b922e1ec4 ContainerID="a6ea405ad39a9570f2e560727c79c598b9f07fdb3b73419a20036dbf629f18f7" Namespace="calico-system" Pod="calico-kube-controllers-849d69c5fc-hlqw7" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--849d69c5fc--hlqw7-eth0" Jan 30 13:02:49.602311 containerd[1447]: 2025-01-30 13:02:49.581 [INFO][4004] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a6ea405ad39a9570f2e560727c79c598b9f07fdb3b73419a20036dbf629f18f7" Namespace="calico-system" Pod="calico-kube-controllers-849d69c5fc-hlqw7" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--849d69c5fc--hlqw7-eth0" Jan 30 13:02:49.602311 containerd[1447]: 2025-01-30 13:02:49.582 [INFO][4004] cni-plugin/k8s.go 414: Added Mac, interface name, and active 
container ID to endpoint ContainerID="a6ea405ad39a9570f2e560727c79c598b9f07fdb3b73419a20036dbf629f18f7" Namespace="calico-system" Pod="calico-kube-controllers-849d69c5fc-hlqw7" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--849d69c5fc--hlqw7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--849d69c5fc--hlqw7-eth0", GenerateName:"calico-kube-controllers-849d69c5fc-", Namespace:"calico-system", SelfLink:"", UID:"6f5a0fbe-ae0d-4b06-9fb9-d4626175bd41", ResourceVersion:"774", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 2, 28, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"849d69c5fc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a6ea405ad39a9570f2e560727c79c598b9f07fdb3b73419a20036dbf629f18f7", Pod:"calico-kube-controllers-849d69c5fc-hlqw7", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia5b922e1ec4", MAC:"b2:3c:00:b6:99:97", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:02:49.602311 containerd[1447]: 2025-01-30 13:02:49.597 [INFO][4004] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="a6ea405ad39a9570f2e560727c79c598b9f07fdb3b73419a20036dbf629f18f7" Namespace="calico-system" Pod="calico-kube-controllers-849d69c5fc-hlqw7" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--849d69c5fc--hlqw7-eth0" Jan 30 13:02:49.632324 containerd[1447]: time="2025-01-30T13:02:49.632138988Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:02:49.632324 containerd[1447]: time="2025-01-30T13:02:49.632237588Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:02:49.632324 containerd[1447]: time="2025-01-30T13:02:49.632249868Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:02:49.633913 containerd[1447]: time="2025-01-30T13:02:49.632359628Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:02:49.658862 systemd[1]: Started cri-containerd-a6ea405ad39a9570f2e560727c79c598b9f07fdb3b73419a20036dbf629f18f7.scope - libcontainer container a6ea405ad39a9570f2e560727c79c598b9f07fdb3b73419a20036dbf629f18f7. 
Jan 30 13:02:49.674243 systemd-resolved[1314]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 30 13:02:49.694011 containerd[1447]: time="2025-01-30T13:02:49.693882955Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-849d69c5fc-hlqw7,Uid:6f5a0fbe-ae0d-4b06-9fb9-d4626175bd41,Namespace:calico-system,Attempt:1,} returns sandbox id \"a6ea405ad39a9570f2e560727c79c598b9f07fdb3b73419a20036dbf629f18f7\"" Jan 30 13:02:49.697097 containerd[1447]: time="2025-01-30T13:02:49.695548953Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Jan 30 13:02:50.037534 containerd[1447]: time="2025-01-30T13:02:50.037472912Z" level=info msg="StopPodSandbox for \"0b829235a0188946a6449c9e3ddc7b91946992c18efc247bd07d9cc756ab37ab\"" Jan 30 13:02:50.139749 containerd[1447]: 2025-01-30 13:02:50.088 [INFO][4115] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="0b829235a0188946a6449c9e3ddc7b91946992c18efc247bd07d9cc756ab37ab" Jan 30 13:02:50.139749 containerd[1447]: 2025-01-30 13:02:50.088 [INFO][4115] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="0b829235a0188946a6449c9e3ddc7b91946992c18efc247bd07d9cc756ab37ab" iface="eth0" netns="/var/run/netns/cni-54dcfc73-77e9-4061-0a7f-62a880c1e2ea" Jan 30 13:02:50.139749 containerd[1447]: 2025-01-30 13:02:50.088 [INFO][4115] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="0b829235a0188946a6449c9e3ddc7b91946992c18efc247bd07d9cc756ab37ab" iface="eth0" netns="/var/run/netns/cni-54dcfc73-77e9-4061-0a7f-62a880c1e2ea" Jan 30 13:02:50.139749 containerd[1447]: 2025-01-30 13:02:50.089 [INFO][4115] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="0b829235a0188946a6449c9e3ddc7b91946992c18efc247bd07d9cc756ab37ab" iface="eth0" netns="/var/run/netns/cni-54dcfc73-77e9-4061-0a7f-62a880c1e2ea" Jan 30 13:02:50.139749 containerd[1447]: 2025-01-30 13:02:50.089 [INFO][4115] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="0b829235a0188946a6449c9e3ddc7b91946992c18efc247bd07d9cc756ab37ab" Jan 30 13:02:50.139749 containerd[1447]: 2025-01-30 13:02:50.089 [INFO][4115] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0b829235a0188946a6449c9e3ddc7b91946992c18efc247bd07d9cc756ab37ab" Jan 30 13:02:50.139749 containerd[1447]: 2025-01-30 13:02:50.120 [INFO][4123] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0b829235a0188946a6449c9e3ddc7b91946992c18efc247bd07d9cc756ab37ab" HandleID="k8s-pod-network.0b829235a0188946a6449c9e3ddc7b91946992c18efc247bd07d9cc756ab37ab" Workload="localhost-k8s-calico--apiserver--6cb779964f--9bb6z-eth0" Jan 30 13:02:50.139749 containerd[1447]: 2025-01-30 13:02:50.120 [INFO][4123] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:02:50.139749 containerd[1447]: 2025-01-30 13:02:50.120 [INFO][4123] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:02:50.139749 containerd[1447]: 2025-01-30 13:02:50.132 [WARNING][4123] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0b829235a0188946a6449c9e3ddc7b91946992c18efc247bd07d9cc756ab37ab" HandleID="k8s-pod-network.0b829235a0188946a6449c9e3ddc7b91946992c18efc247bd07d9cc756ab37ab" Workload="localhost-k8s-calico--apiserver--6cb779964f--9bb6z-eth0" Jan 30 13:02:50.139749 containerd[1447]: 2025-01-30 13:02:50.132 [INFO][4123] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0b829235a0188946a6449c9e3ddc7b91946992c18efc247bd07d9cc756ab37ab" HandleID="k8s-pod-network.0b829235a0188946a6449c9e3ddc7b91946992c18efc247bd07d9cc756ab37ab" Workload="localhost-k8s-calico--apiserver--6cb779964f--9bb6z-eth0" Jan 30 13:02:50.139749 containerd[1447]: 2025-01-30 13:02:50.135 [INFO][4123] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:02:50.139749 containerd[1447]: 2025-01-30 13:02:50.138 [INFO][4115] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="0b829235a0188946a6449c9e3ddc7b91946992c18efc247bd07d9cc756ab37ab" Jan 30 13:02:50.140886 containerd[1447]: time="2025-01-30T13:02:50.139867278Z" level=info msg="TearDown network for sandbox \"0b829235a0188946a6449c9e3ddc7b91946992c18efc247bd07d9cc756ab37ab\" successfully" Jan 30 13:02:50.140886 containerd[1447]: time="2025-01-30T13:02:50.139896438Z" level=info msg="StopPodSandbox for \"0b829235a0188946a6449c9e3ddc7b91946992c18efc247bd07d9cc756ab37ab\" returns successfully" Jan 30 13:02:50.140886 containerd[1447]: time="2025-01-30T13:02:50.140651037Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cb779964f-9bb6z,Uid:375a330c-8230-4057-b70e-a0f2609c831f,Namespace:calico-apiserver,Attempt:1,}" Jan 30 13:02:50.279924 systemd-networkd[1371]: caliecbae0db9ae: Link UP Jan 30 13:02:50.280070 systemd-networkd[1371]: caliecbae0db9ae: Gained carrier Jan 30 13:02:50.292301 containerd[1447]: 2025-01-30 13:02:50.204 [INFO][4133] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6cb779964f--9bb6z-eth0 calico-apiserver-6cb779964f- calico-apiserver 375a330c-8230-4057-b70e-a0f2609c831f 802 0 2025-01-30 13:02:27 +0000 UTC <nil> <nil> map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6cb779964f projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6cb779964f-9bb6z eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] caliecbae0db9ae [] []}} ContainerID="ec132c2b7f9a491a92d438ecafc5a14c1c5aae2b8254433f1c6c7519153f6fa0" Namespace="calico-apiserver" Pod="calico-apiserver-6cb779964f-9bb6z" WorkloadEndpoint="localhost-k8s-calico--apiserver--6cb779964f--9bb6z-" Jan 30 13:02:50.292301 containerd[1447]: 2025-01-30 13:02:50.204 [INFO][4133] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="ec132c2b7f9a491a92d438ecafc5a14c1c5aae2b8254433f1c6c7519153f6fa0" Namespace="calico-apiserver" Pod="calico-apiserver-6cb779964f-9bb6z" WorkloadEndpoint="localhost-k8s-calico--apiserver--6cb779964f--9bb6z-eth0" Jan 30 13:02:50.292301 containerd[1447]: 2025-01-30 13:02:50.229 [INFO][4147] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ec132c2b7f9a491a92d438ecafc5a14c1c5aae2b8254433f1c6c7519153f6fa0" HandleID="k8s-pod-network.ec132c2b7f9a491a92d438ecafc5a14c1c5aae2b8254433f1c6c7519153f6fa0" Workload="localhost-k8s-calico--apiserver--6cb779964f--9bb6z-eth0" Jan 30 
13:02:50.292301 containerd[1447]: 2025-01-30 13:02:50.242 [INFO][4147] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ec132c2b7f9a491a92d438ecafc5a14c1c5aae2b8254433f1c6c7519153f6fa0" HandleID="k8s-pod-network.ec132c2b7f9a491a92d438ecafc5a14c1c5aae2b8254433f1c6c7519153f6fa0" Workload="localhost-k8s-calico--apiserver--6cb779964f--9bb6z-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000136540), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-6cb779964f-9bb6z", "timestamp":"2025-01-30 13:02:50.229218539 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 13:02:50.292301 containerd[1447]: 2025-01-30 13:02:50.242 [INFO][4147] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:02:50.292301 containerd[1447]: 2025-01-30 13:02:50.242 [INFO][4147] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:02:50.292301 containerd[1447]: 2025-01-30 13:02:50.242 [INFO][4147] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 30 13:02:50.292301 containerd[1447]: 2025-01-30 13:02:50.244 [INFO][4147] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.ec132c2b7f9a491a92d438ecafc5a14c1c5aae2b8254433f1c6c7519153f6fa0" host="localhost" Jan 30 13:02:50.292301 containerd[1447]: 2025-01-30 13:02:50.248 [INFO][4147] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 30 13:02:50.292301 containerd[1447]: 2025-01-30 13:02:50.252 [INFO][4147] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 30 13:02:50.292301 containerd[1447]: 2025-01-30 13:02:50.254 [INFO][4147] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 30 13:02:50.292301 containerd[1447]: 2025-01-30 13:02:50.260 [INFO][4147] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 30 13:02:50.292301 containerd[1447]: 2025-01-30 13:02:50.260 [INFO][4147] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ec132c2b7f9a491a92d438ecafc5a14c1c5aae2b8254433f1c6c7519153f6fa0" host="localhost" Jan 30 13:02:50.292301 containerd[1447]: 2025-01-30 13:02:50.262 [INFO][4147] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.ec132c2b7f9a491a92d438ecafc5a14c1c5aae2b8254433f1c6c7519153f6fa0 Jan 30 13:02:50.292301 containerd[1447]: 2025-01-30 13:02:50.266 [INFO][4147] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ec132c2b7f9a491a92d438ecafc5a14c1c5aae2b8254433f1c6c7519153f6fa0" host="localhost" Jan 30 13:02:50.292301 containerd[1447]: 2025-01-30 13:02:50.272 [INFO][4147] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.ec132c2b7f9a491a92d438ecafc5a14c1c5aae2b8254433f1c6c7519153f6fa0" host="localhost" Jan 30 13:02:50.292301 containerd[1447]: 2025-01-30 13:02:50.272 [INFO][4147] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.ec132c2b7f9a491a92d438ecafc5a14c1c5aae2b8254433f1c6c7519153f6fa0" host="localhost" Jan 30 13:02:50.292301 containerd[1447]: 2025-01-30 13:02:50.272 [INFO][4147] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 30 13:02:50.292301 containerd[1447]: 2025-01-30 13:02:50.272 [INFO][4147] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="ec132c2b7f9a491a92d438ecafc5a14c1c5aae2b8254433f1c6c7519153f6fa0" HandleID="k8s-pod-network.ec132c2b7f9a491a92d438ecafc5a14c1c5aae2b8254433f1c6c7519153f6fa0" Workload="localhost-k8s-calico--apiserver--6cb779964f--9bb6z-eth0" Jan 30 13:02:50.292934 containerd[1447]: 2025-01-30 13:02:50.274 [INFO][4133] cni-plugin/k8s.go 386: Populated endpoint ContainerID="ec132c2b7f9a491a92d438ecafc5a14c1c5aae2b8254433f1c6c7519153f6fa0" Namespace="calico-apiserver" Pod="calico-apiserver-6cb779964f-9bb6z" WorkloadEndpoint="localhost-k8s-calico--apiserver--6cb779964f--9bb6z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6cb779964f--9bb6z-eth0", GenerateName:"calico-apiserver-6cb779964f-", Namespace:"calico-apiserver", SelfLink:"", UID:"375a330c-8230-4057-b70e-a0f2609c831f", ResourceVersion:"802", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 2, 27, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6cb779964f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6cb779964f-9bb6z", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliecbae0db9ae", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:02:50.292934 containerd[1447]: 2025-01-30 13:02:50.274 [INFO][4133] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="ec132c2b7f9a491a92d438ecafc5a14c1c5aae2b8254433f1c6c7519153f6fa0" Namespace="calico-apiserver" Pod="calico-apiserver-6cb779964f-9bb6z" WorkloadEndpoint="localhost-k8s-calico--apiserver--6cb779964f--9bb6z-eth0" Jan 30 13:02:50.292934 containerd[1447]: 2025-01-30 13:02:50.274 [INFO][4133] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliecbae0db9ae ContainerID="ec132c2b7f9a491a92d438ecafc5a14c1c5aae2b8254433f1c6c7519153f6fa0" Namespace="calico-apiserver" Pod="calico-apiserver-6cb779964f-9bb6z" WorkloadEndpoint="localhost-k8s-calico--apiserver--6cb779964f--9bb6z-eth0" Jan 30 13:02:50.292934 containerd[1447]: 2025-01-30 13:02:50.279 [INFO][4133] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ec132c2b7f9a491a92d438ecafc5a14c1c5aae2b8254433f1c6c7519153f6fa0" Namespace="calico-apiserver" Pod="calico-apiserver-6cb779964f-9bb6z" WorkloadEndpoint="localhost-k8s-calico--apiserver--6cb779964f--9bb6z-eth0" Jan 30 13:02:50.292934 containerd[1447]: 2025-01-30 13:02:50.279 [INFO][4133] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="ec132c2b7f9a491a92d438ecafc5a14c1c5aae2b8254433f1c6c7519153f6fa0" Namespace="calico-apiserver" Pod="calico-apiserver-6cb779964f-9bb6z" WorkloadEndpoint="localhost-k8s-calico--apiserver--6cb779964f--9bb6z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6cb779964f--9bb6z-eth0", GenerateName:"calico-apiserver-6cb779964f-", Namespace:"calico-apiserver", SelfLink:"", UID:"375a330c-8230-4057-b70e-a0f2609c831f", ResourceVersion:"802", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 2, 27, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6cb779964f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ec132c2b7f9a491a92d438ecafc5a14c1c5aae2b8254433f1c6c7519153f6fa0", Pod:"calico-apiserver-6cb779964f-9bb6z", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliecbae0db9ae", MAC:"86:c9:f0:64:66:7a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:02:50.292934 containerd[1447]: 2025-01-30 13:02:50.290 [INFO][4133] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="ec132c2b7f9a491a92d438ecafc5a14c1c5aae2b8254433f1c6c7519153f6fa0" Namespace="calico-apiserver" Pod="calico-apiserver-6cb779964f-9bb6z" WorkloadEndpoint="localhost-k8s-calico--apiserver--6cb779964f--9bb6z-eth0" Jan 30 13:02:50.315347 containerd[1447]: time="2025-01-30T13:02:50.314946324Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:02:50.315347 containerd[1447]: time="2025-01-30T13:02:50.314998564Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:02:50.315347 containerd[1447]: time="2025-01-30T13:02:50.315009404Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:02:50.315347 containerd[1447]: time="2025-01-30T13:02:50.315077884Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:02:50.334843 systemd[1]: Started cri-containerd-ec132c2b7f9a491a92d438ecafc5a14c1c5aae2b8254433f1c6c7519153f6fa0.scope - libcontainer container ec132c2b7f9a491a92d438ecafc5a14c1c5aae2b8254433f1c6c7519153f6fa0. 
Jan 30 13:02:50.349882 systemd-resolved[1314]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 30 13:02:50.372202 containerd[1447]: time="2025-01-30T13:02:50.372137021Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cb779964f-9bb6z,Uid:375a330c-8230-4057-b70e-a0f2609c831f,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"ec132c2b7f9a491a92d438ecafc5a14c1c5aae2b8254433f1c6c7519153f6fa0\"" Jan 30 13:02:50.396464 systemd[1]: run-netns-cni\x2d54dcfc73\x2d77e9\x2d4061\x2d0a7f\x2d62a880c1e2ea.mount: Deactivated successfully. Jan 30 13:02:51.001354 systemd-networkd[1371]: calia5b922e1ec4: Gained IPv6LL Jan 30 13:02:51.036980 containerd[1447]: time="2025-01-30T13:02:51.036939686Z" level=info msg="StopPodSandbox for \"e4dd4ad6126e00360bfc4071a74b5835efabbe556438724ffb9eb180e0498716\"" Jan 30 13:02:51.038572 containerd[1447]: time="2025-01-30T13:02:51.038104405Z" level=info msg="StopPodSandbox for \"73e6685a3f4e555815fd515090951c9e9eec92296cc236a94a5f1c9fdca86dcc\"" Jan 30 13:02:51.156506 containerd[1447]: 2025-01-30 13:02:51.113 [INFO][4245] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="73e6685a3f4e555815fd515090951c9e9eec92296cc236a94a5f1c9fdca86dcc" Jan 30 13:02:51.156506 containerd[1447]: 2025-01-30 13:02:51.113 [INFO][4245] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="73e6685a3f4e555815fd515090951c9e9eec92296cc236a94a5f1c9fdca86dcc" iface="eth0" netns="/var/run/netns/cni-0799084f-e89b-2cc9-e602-0623e6d7e961" Jan 30 13:02:51.156506 containerd[1447]: 2025-01-30 13:02:51.114 [INFO][4245] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="73e6685a3f4e555815fd515090951c9e9eec92296cc236a94a5f1c9fdca86dcc" iface="eth0" netns="/var/run/netns/cni-0799084f-e89b-2cc9-e602-0623e6d7e961" Jan 30 13:02:51.156506 containerd[1447]: 2025-01-30 13:02:51.115 [INFO][4245] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="73e6685a3f4e555815fd515090951c9e9eec92296cc236a94a5f1c9fdca86dcc" iface="eth0" netns="/var/run/netns/cni-0799084f-e89b-2cc9-e602-0623e6d7e961" Jan 30 13:02:51.156506 containerd[1447]: 2025-01-30 13:02:51.115 [INFO][4245] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="73e6685a3f4e555815fd515090951c9e9eec92296cc236a94a5f1c9fdca86dcc" Jan 30 13:02:51.156506 containerd[1447]: 2025-01-30 13:02:51.115 [INFO][4245] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="73e6685a3f4e555815fd515090951c9e9eec92296cc236a94a5f1c9fdca86dcc" Jan 30 13:02:51.156506 containerd[1447]: 2025-01-30 13:02:51.140 [INFO][4260] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="73e6685a3f4e555815fd515090951c9e9eec92296cc236a94a5f1c9fdca86dcc" HandleID="k8s-pod-network.73e6685a3f4e555815fd515090951c9e9eec92296cc236a94a5f1c9fdca86dcc" Workload="localhost-k8s-calico--apiserver--6cb779964f--8zwbp-eth0" Jan 30 13:02:51.156506 containerd[1447]: 2025-01-30 13:02:51.140 [INFO][4260] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:02:51.156506 containerd[1447]: 2025-01-30 13:02:51.140 [INFO][4260] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:02:51.156506 containerd[1447]: 2025-01-30 13:02:51.150 [WARNING][4260] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="73e6685a3f4e555815fd515090951c9e9eec92296cc236a94a5f1c9fdca86dcc" HandleID="k8s-pod-network.73e6685a3f4e555815fd515090951c9e9eec92296cc236a94a5f1c9fdca86dcc" Workload="localhost-k8s-calico--apiserver--6cb779964f--8zwbp-eth0" Jan 30 13:02:51.156506 containerd[1447]: 2025-01-30 13:02:51.150 [INFO][4260] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="73e6685a3f4e555815fd515090951c9e9eec92296cc236a94a5f1c9fdca86dcc" HandleID="k8s-pod-network.73e6685a3f4e555815fd515090951c9e9eec92296cc236a94a5f1c9fdca86dcc" Workload="localhost-k8s-calico--apiserver--6cb779964f--8zwbp-eth0" Jan 30 13:02:51.156506 containerd[1447]: 2025-01-30 13:02:51.152 [INFO][4260] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:02:51.156506 containerd[1447]: 2025-01-30 13:02:51.153 [INFO][4245] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="73e6685a3f4e555815fd515090951c9e9eec92296cc236a94a5f1c9fdca86dcc" Jan 30 13:02:51.158535 containerd[1447]: time="2025-01-30T13:02:51.158489640Z" level=info msg="TearDown network for sandbox \"73e6685a3f4e555815fd515090951c9e9eec92296cc236a94a5f1c9fdca86dcc\" successfully" Jan 30 13:02:51.158600 containerd[1447]: time="2025-01-30T13:02:51.158532640Z" level=info msg="StopPodSandbox for \"73e6685a3f4e555815fd515090951c9e9eec92296cc236a94a5f1c9fdca86dcc\" returns successfully" Jan 30 13:02:51.158554 systemd[1]: run-netns-cni\x2d0799084f\x2de89b\x2d2cc9\x2de602\x2d0623e6d7e961.mount: Deactivated successfully. Jan 30 13:02:51.159701 containerd[1447]: time="2025-01-30T13:02:51.159669838Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cb779964f-8zwbp,Uid:3cc3298b-ef9c-4aeb-903e-4a8f5c9daf9d,Namespace:calico-apiserver,Attempt:1,}" Jan 30 13:02:51.166218 containerd[1447]: 2025-01-30 13:02:51.115 [INFO][4244] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="e4dd4ad6126e00360bfc4071a74b5835efabbe556438724ffb9eb180e0498716" Jan 30 13:02:51.166218 containerd[1447]: 2025-01-30 13:02:51.115 [INFO][4244] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="e4dd4ad6126e00360bfc4071a74b5835efabbe556438724ffb9eb180e0498716" iface="eth0" netns="/var/run/netns/cni-271573a7-67e6-5513-48e8-03f1358a3d57" Jan 30 13:02:51.166218 containerd[1447]: 2025-01-30 13:02:51.115 [INFO][4244] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e4dd4ad6126e00360bfc4071a74b5835efabbe556438724ffb9eb180e0498716" iface="eth0" netns="/var/run/netns/cni-271573a7-67e6-5513-48e8-03f1358a3d57" Jan 30 13:02:51.166218 containerd[1447]: 2025-01-30 13:02:51.115 [INFO][4244] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="e4dd4ad6126e00360bfc4071a74b5835efabbe556438724ffb9eb180e0498716" iface="eth0" netns="/var/run/netns/cni-271573a7-67e6-5513-48e8-03f1358a3d57" Jan 30 13:02:51.166218 containerd[1447]: 2025-01-30 13:02:51.115 [INFO][4244] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e4dd4ad6126e00360bfc4071a74b5835efabbe556438724ffb9eb180e0498716" Jan 30 13:02:51.166218 containerd[1447]: 2025-01-30 13:02:51.115 [INFO][4244] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e4dd4ad6126e00360bfc4071a74b5835efabbe556438724ffb9eb180e0498716" Jan 30 13:02:51.166218 containerd[1447]: 2025-01-30 13:02:51.143 [INFO][4261] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e4dd4ad6126e00360bfc4071a74b5835efabbe556438724ffb9eb180e0498716" HandleID="k8s-pod-network.e4dd4ad6126e00360bfc4071a74b5835efabbe556438724ffb9eb180e0498716" Workload="localhost-k8s-csi--node--driver--slr48-eth0" Jan 30 13:02:51.166218 containerd[1447]: 2025-01-30 13:02:51.143 [INFO][4261] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:02:51.166218 containerd[1447]: 2025-01-30 13:02:51.152 [INFO][4261] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:02:51.166218 containerd[1447]: 2025-01-30 13:02:51.161 [WARNING][4261] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="e4dd4ad6126e00360bfc4071a74b5835efabbe556438724ffb9eb180e0498716" HandleID="k8s-pod-network.e4dd4ad6126e00360bfc4071a74b5835efabbe556438724ffb9eb180e0498716" Workload="localhost-k8s-csi--node--driver--slr48-eth0" Jan 30 13:02:51.166218 containerd[1447]: 2025-01-30 13:02:51.161 [INFO][4261] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e4dd4ad6126e00360bfc4071a74b5835efabbe556438724ffb9eb180e0498716" HandleID="k8s-pod-network.e4dd4ad6126e00360bfc4071a74b5835efabbe556438724ffb9eb180e0498716" Workload="localhost-k8s-csi--node--driver--slr48-eth0" Jan 30 13:02:51.166218 containerd[1447]: 2025-01-30 13:02:51.163 [INFO][4261] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:02:51.166218 containerd[1447]: 2025-01-30 13:02:51.164 [INFO][4244] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="e4dd4ad6126e00360bfc4071a74b5835efabbe556438724ffb9eb180e0498716" Jan 30 13:02:51.167176 containerd[1447]: time="2025-01-30T13:02:51.167048991Z" level=info msg="TearDown network for sandbox \"e4dd4ad6126e00360bfc4071a74b5835efabbe556438724ffb9eb180e0498716\" successfully" Jan 30 13:02:51.167176 containerd[1447]: time="2025-01-30T13:02:51.167081591Z" level=info msg="StopPodSandbox for \"e4dd4ad6126e00360bfc4071a74b5835efabbe556438724ffb9eb180e0498716\" returns successfully" Jan 30 13:02:51.167663 containerd[1447]: time="2025-01-30T13:02:51.167640790Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-slr48,Uid:e9f4c40c-83e4-4e09-bcf8-7d4d055d34c9,Namespace:calico-system,Attempt:1,}" Jan 30 13:02:51.168635 systemd[1]: run-netns-cni\x2d271573a7\x2d67e6\x2d5513\x2d48e8\x2d03f1358a3d57.mount: Deactivated successfully. 
Jan 30 13:02:51.335494 systemd-networkd[1371]: cali978ee8f3de7: Link UP Jan 30 13:02:51.335709 systemd-networkd[1371]: cali978ee8f3de7: Gained carrier Jan 30 13:02:51.361642 containerd[1447]: 2025-01-30 13:02:51.244 [INFO][4285] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--slr48-eth0 csi-node-driver- calico-system e9f4c40c-83e4-4e09-bcf8-7d4d055d34c9 816 0 2025-01-30 13:02:28 +0000 UTC <nil> <nil> map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:56747c9949 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-slr48 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali978ee8f3de7 [] []}} ContainerID="8392468656b32d926bc08bb347d0faaf0648b19b0f982248d1923b950701021d" Namespace="calico-system" Pod="csi-node-driver-slr48" WorkloadEndpoint="localhost-k8s-csi--node--driver--slr48-" Jan 30 13:02:51.361642 containerd[1447]: 2025-01-30 13:02:51.245 [INFO][4285] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="8392468656b32d926bc08bb347d0faaf0648b19b0f982248d1923b950701021d" Namespace="calico-system" Pod="csi-node-driver-slr48" WorkloadEndpoint="localhost-k8s-csi--node--driver--slr48-eth0" Jan 30 13:02:51.361642 containerd[1447]: 2025-01-30 13:02:51.281 [INFO][4304] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8392468656b32d926bc08bb347d0faaf0648b19b0f982248d1923b950701021d" HandleID="k8s-pod-network.8392468656b32d926bc08bb347d0faaf0648b19b0f982248d1923b950701021d" Workload="localhost-k8s-csi--node--driver--slr48-eth0" Jan 30 13:02:51.361642 containerd[1447]: 2025-01-30 13:02:51.293 [INFO][4304] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8392468656b32d926bc08bb347d0faaf0648b19b0f982248d1923b950701021d" HandleID="k8s-pod-network.8392468656b32d926bc08bb347d0faaf0648b19b0f982248d1923b950701021d" Workload="localhost-k8s-csi--node--driver--slr48-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d8800), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-slr48", "timestamp":"2025-01-30 13:02:51.281426032 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 13:02:51.361642 containerd[1447]: 2025-01-30 13:02:51.294 [INFO][4304] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:02:51.361642 containerd[1447]: 2025-01-30 13:02:51.294 [INFO][4304] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
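The assignArgs dump above shows the request the CNI plugin hands to Calico IPAM for csi-node-driver-slr48: one IPv4 address, no IPv6, keyed by a per-container handle, with the namespace, node and pod recorded as attributes. The following small Go snippet mirrors only those logged fields to make the request readable; the real type is Calico's ipam.AutoAssignArgs, and the pool and reservation fields (empty in the log) are omitted here.

package main

import "fmt"

// autoAssignArgs is a local, illustration-only mirror of the fields printed
// in the assignArgs line above; it is not the Calico library type.
type autoAssignArgs struct {
    Num4, Num6  int
    HandleID    *string
    Attrs       map[string]string
    Hostname    string
    IntendedUse string
}

func main() {
    handle := "k8s-pod-network.8392468656b32d926bc08bb347d0faaf0648b19b0f982248d1923b950701021d"
    req := autoAssignArgs{
        Num4:     1, // one IPv4 address, no IPv6, exactly as logged
        Num6:     0,
        HandleID: &handle,
        Attrs: map[string]string{
            "namespace": "calico-system",
            "node":      "localhost",
            "pod":       "csi-node-driver-slr48",
        },
        Hostname:    "localhost",
        IntendedUse: "Workload",
    }
    fmt.Printf("%+v\n", req)
}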
Jan 30 13:02:51.361642 containerd[1447]: 2025-01-30 13:02:51.294 [INFO][4304] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 30 13:02:51.361642 containerd[1447]: 2025-01-30 13:02:51.296 [INFO][4304] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.8392468656b32d926bc08bb347d0faaf0648b19b0f982248d1923b950701021d" host="localhost" Jan 30 13:02:51.361642 containerd[1447]: 2025-01-30 13:02:51.301 [INFO][4304] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 30 13:02:51.361642 containerd[1447]: 2025-01-30 13:02:51.306 [INFO][4304] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 30 13:02:51.361642 containerd[1447]: 2025-01-30 13:02:51.313 [INFO][4304] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 30 13:02:51.361642 containerd[1447]: 2025-01-30 13:02:51.318 [INFO][4304] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 30 13:02:51.361642 containerd[1447]: 2025-01-30 13:02:51.318 [INFO][4304] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.8392468656b32d926bc08bb347d0faaf0648b19b0f982248d1923b950701021d" host="localhost" Jan 30 13:02:51.361642 containerd[1447]: 2025-01-30 13:02:51.319 [INFO][4304] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.8392468656b32d926bc08bb347d0faaf0648b19b0f982248d1923b950701021d Jan 30 13:02:51.361642 containerd[1447]: 2025-01-30 13:02:51.323 [INFO][4304] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.8392468656b32d926bc08bb347d0faaf0648b19b0f982248d1923b950701021d" host="localhost" Jan 30 13:02:51.361642 containerd[1447]: 2025-01-30 13:02:51.330 [INFO][4304] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.8392468656b32d926bc08bb347d0faaf0648b19b0f982248d1923b950701021d" host="localhost" Jan 30 13:02:51.361642 containerd[1447]: 2025-01-30 13:02:51.330 [INFO][4304] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.8392468656b32d926bc08bb347d0faaf0648b19b0f982248d1923b950701021d" host="localhost" Jan 30 13:02:51.361642 containerd[1447]: 2025-01-30 13:02:51.330 [INFO][4304] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
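The claim above takes 192.168.88.131 out of the host's affinity block 192.168.88.128/26. A self-contained check with Go's net/netip, with the block and address copied from the log and no Calico dependency, confirms the claimed address really lies inside that block:

package main

import (
    "fmt"
    "net/netip"
)

func main() {
    // Affinity block and claimed address taken from the IPAM messages above.
    block := netip.MustParsePrefix("192.168.88.128/26")
    claimed := netip.MustParseAddr("192.168.88.131")

    fmt.Println("block contains claimed address:", block.Contains(claimed)) // true

    // The first few addresses of the /26 block, to show where .131 sits.
    for a, i := block.Addr(), 0; i < 4 && block.Contains(a); a, i = a.Next(), i+1 {
        fmt.Println(a)
    }
}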
Jan 30 13:02:51.361642 containerd[1447]: 2025-01-30 13:02:51.330 [INFO][4304] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="8392468656b32d926bc08bb347d0faaf0648b19b0f982248d1923b950701021d" HandleID="k8s-pod-network.8392468656b32d926bc08bb347d0faaf0648b19b0f982248d1923b950701021d" Workload="localhost-k8s-csi--node--driver--slr48-eth0" Jan 30 13:02:51.362306 containerd[1447]: 2025-01-30 13:02:51.332 [INFO][4285] cni-plugin/k8s.go 386: Populated endpoint ContainerID="8392468656b32d926bc08bb347d0faaf0648b19b0f982248d1923b950701021d" Namespace="calico-system" Pod="csi-node-driver-slr48" WorkloadEndpoint="localhost-k8s-csi--node--driver--slr48-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--slr48-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"e9f4c40c-83e4-4e09-bcf8-7d4d055d34c9", ResourceVersion:"816", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 2, 28, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-slr48", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali978ee8f3de7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:02:51.362306 containerd[1447]: 2025-01-30 13:02:51.332 [INFO][4285] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="8392468656b32d926bc08bb347d0faaf0648b19b0f982248d1923b950701021d" Namespace="calico-system" Pod="csi-node-driver-slr48" WorkloadEndpoint="localhost-k8s-csi--node--driver--slr48-eth0" Jan 30 13:02:51.362306 containerd[1447]: 2025-01-30 13:02:51.332 [INFO][4285] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali978ee8f3de7 ContainerID="8392468656b32d926bc08bb347d0faaf0648b19b0f982248d1923b950701021d" Namespace="calico-system" Pod="csi-node-driver-slr48" WorkloadEndpoint="localhost-k8s-csi--node--driver--slr48-eth0" Jan 30 13:02:51.362306 containerd[1447]: 2025-01-30 13:02:51.335 [INFO][4285] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8392468656b32d926bc08bb347d0faaf0648b19b0f982248d1923b950701021d" Namespace="calico-system" Pod="csi-node-driver-slr48" WorkloadEndpoint="localhost-k8s-csi--node--driver--slr48-eth0" Jan 30 13:02:51.362306 containerd[1447]: 2025-01-30 13:02:51.336 [INFO][4285] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="8392468656b32d926bc08bb347d0faaf0648b19b0f982248d1923b950701021d" Namespace="calico-system" Pod="csi-node-driver-slr48" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--slr48-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--slr48-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"e9f4c40c-83e4-4e09-bcf8-7d4d055d34c9", ResourceVersion:"816", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 2, 28, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8392468656b32d926bc08bb347d0faaf0648b19b0f982248d1923b950701021d", Pod:"csi-node-driver-slr48", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali978ee8f3de7", MAC:"f2:cd:d9:80:c5:12", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:02:51.362306 containerd[1447]: 2025-01-30 13:02:51.353 [INFO][4285] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="8392468656b32d926bc08bb347d0faaf0648b19b0f982248d1923b950701021d" Namespace="calico-system" Pod="csi-node-driver-slr48" WorkloadEndpoint="localhost-k8s-csi--node--driver--slr48-eth0" Jan 30 13:02:51.442883 systemd-networkd[1371]: cali1ad9239eb17: Link UP Jan 30 13:02:51.443526 systemd-networkd[1371]: cali1ad9239eb17: Gained carrier Jan 30 13:02:51.446526 containerd[1447]: time="2025-01-30T13:02:51.446408260Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:02:51.446526 containerd[1447]: time="2025-01-30T13:02:51.446482900Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:02:51.446526 containerd[1447]: time="2025-01-30T13:02:51.446509180Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:02:51.446823 containerd[1447]: time="2025-01-30T13:02:51.446598460Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:02:51.466738 containerd[1447]: 2025-01-30 13:02:51.245 [INFO][4275] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6cb779964f--8zwbp-eth0 calico-apiserver-6cb779964f- calico-apiserver 3cc3298b-ef9c-4aeb-903e-4a8f5c9daf9d 817 0 2025-01-30 13:02:27 +0000 UTC <nil> <nil> map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6cb779964f projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6cb779964f-8zwbp eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali1ad9239eb17 [] []}} ContainerID="c9f170d5110f0f9df66091317d68e259e896724fc591b94eac2b76aa286b8ec1" Namespace="calico-apiserver" Pod="calico-apiserver-6cb779964f-8zwbp" WorkloadEndpoint="localhost-k8s-calico--apiserver--6cb779964f--8zwbp-" Jan 30 13:02:51.466738 containerd[1447]: 2025-01-30 13:02:51.246 [INFO][4275] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="c9f170d5110f0f9df66091317d68e259e896724fc591b94eac2b76aa286b8ec1" Namespace="calico-apiserver" Pod="calico-apiserver-6cb779964f-8zwbp" WorkloadEndpoint="localhost-k8s-calico--apiserver--6cb779964f--8zwbp-eth0" Jan 30 13:02:51.466738 containerd[1447]: 2025-01-30 13:02:51.304 [INFO][4309] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c9f170d5110f0f9df66091317d68e259e896724fc591b94eac2b76aa286b8ec1" HandleID="k8s-pod-network.c9f170d5110f0f9df66091317d68e259e896724fc591b94eac2b76aa286b8ec1" Workload="localhost-k8s-calico--apiserver--6cb779964f--8zwbp-eth0" Jan 30 13:02:51.466738 containerd[1447]: 2025-01-30 13:02:51.319 [INFO][4309] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c9f170d5110f0f9df66091317d68e259e896724fc591b94eac2b76aa286b8ec1" HandleID="k8s-pod-network.c9f170d5110f0f9df66091317d68e259e896724fc591b94eac2b76aa286b8ec1" Workload="localhost-k8s-calico--apiserver--6cb779964f--8zwbp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400030c710), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-6cb779964f-8zwbp", "timestamp":"2025-01-30 13:02:51.304382288 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 13:02:51.466738 containerd[1447]: 2025-01-30 13:02:51.319 [INFO][4309] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:02:51.466738 containerd[1447]: 2025-01-30 13:02:51.330 [INFO][4309] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 13:02:51.466738 containerd[1447]: 2025-01-30 13:02:51.330 [INFO][4309] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 30 13:02:51.466738 containerd[1447]: 2025-01-30 13:02:51.398 [INFO][4309] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.c9f170d5110f0f9df66091317d68e259e896724fc591b94eac2b76aa286b8ec1" host="localhost" Jan 30 13:02:51.466738 containerd[1447]: 2025-01-30 13:02:51.402 [INFO][4309] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 30 13:02:51.466738 containerd[1447]: 2025-01-30 13:02:51.407 [INFO][4309] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 30 13:02:51.466738 containerd[1447]: 2025-01-30 13:02:51.410 [INFO][4309] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 30 13:02:51.466738 containerd[1447]: 2025-01-30 13:02:51.414 [INFO][4309] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 30 13:02:51.466738 containerd[1447]: 2025-01-30 13:02:51.414 [INFO][4309] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c9f170d5110f0f9df66091317d68e259e896724fc591b94eac2b76aa286b8ec1" host="localhost" Jan 30 13:02:51.466738 containerd[1447]: 2025-01-30 13:02:51.423 [INFO][4309] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.c9f170d5110f0f9df66091317d68e259e896724fc591b94eac2b76aa286b8ec1 Jan 30 13:02:51.466738 containerd[1447]: 2025-01-30 13:02:51.427 [INFO][4309] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c9f170d5110f0f9df66091317d68e259e896724fc591b94eac2b76aa286b8ec1" host="localhost" Jan 30 13:02:51.466738 containerd[1447]: 2025-01-30 13:02:51.435 [INFO][4309] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.c9f170d5110f0f9df66091317d68e259e896724fc591b94eac2b76aa286b8ec1" host="localhost" Jan 30 13:02:51.466738 containerd[1447]: 2025-01-30 13:02:51.435 [INFO][4309] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.c9f170d5110f0f9df66091317d68e259e896724fc591b94eac2b76aa286b8ec1" host="localhost" Jan 30 13:02:51.466738 containerd[1447]: 2025-01-30 13:02:51.435 [INFO][4309] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
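When reading a captured journal like this one, it helps to pull the per-handle claims back out of the Calico IPAM lines mechanically. A short Go sketch follows; the regular expression and the two sample lines are taken from the entries above, and the pattern may need adjusting if your containerd relays the plugin output with different quoting.

package main

import (
    "bufio"
    "fmt"
    "regexp"
    "strings"
)

// claimRe pulls the claimed CIDR, the block and the allocation handle out of
// the "Successfully claimed IPs" messages Calico IPAM emits above.
var claimRe = regexp.MustCompile(`Successfully claimed IPs: \[([^\]]+)\] block=(\S+) handle="([^"]+)"`)

func main() {
    logs := `2025-01-30 13:02:51.330 [INFO][4304] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.8392468656b32d926bc08bb347d0faaf0648b19b0f982248d1923b950701021d" host="localhost"
2025-01-30 13:02:51.435 [INFO][4309] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.c9f170d5110f0f9df66091317d68e259e896724fc591b94eac2b76aa286b8ec1" host="localhost"`

    sc := bufio.NewScanner(strings.NewReader(logs))
    for sc.Scan() {
        if m := claimRe.FindStringSubmatch(sc.Text()); m != nil {
            fmt.Printf("claimed %s from block %s via handle %s\n", m[1], m[2], m[3])
        }
    }
}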
Jan 30 13:02:51.466738 containerd[1447]: 2025-01-30 13:02:51.435 [INFO][4309] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="c9f170d5110f0f9df66091317d68e259e896724fc591b94eac2b76aa286b8ec1" HandleID="k8s-pod-network.c9f170d5110f0f9df66091317d68e259e896724fc591b94eac2b76aa286b8ec1" Workload="localhost-k8s-calico--apiserver--6cb779964f--8zwbp-eth0" Jan 30 13:02:51.467442 containerd[1447]: 2025-01-30 13:02:51.439 [INFO][4275] cni-plugin/k8s.go 386: Populated endpoint ContainerID="c9f170d5110f0f9df66091317d68e259e896724fc591b94eac2b76aa286b8ec1" Namespace="calico-apiserver" Pod="calico-apiserver-6cb779964f-8zwbp" WorkloadEndpoint="localhost-k8s-calico--apiserver--6cb779964f--8zwbp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6cb779964f--8zwbp-eth0", GenerateName:"calico-apiserver-6cb779964f-", Namespace:"calico-apiserver", SelfLink:"", UID:"3cc3298b-ef9c-4aeb-903e-4a8f5c9daf9d", ResourceVersion:"817", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 2, 27, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6cb779964f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6cb779964f-8zwbp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1ad9239eb17", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:02:51.467442 containerd[1447]: 2025-01-30 13:02:51.439 [INFO][4275] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="c9f170d5110f0f9df66091317d68e259e896724fc591b94eac2b76aa286b8ec1" Namespace="calico-apiserver" Pod="calico-apiserver-6cb779964f-8zwbp" WorkloadEndpoint="localhost-k8s-calico--apiserver--6cb779964f--8zwbp-eth0" Jan 30 13:02:51.467442 containerd[1447]: 2025-01-30 13:02:51.439 [INFO][4275] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1ad9239eb17 ContainerID="c9f170d5110f0f9df66091317d68e259e896724fc591b94eac2b76aa286b8ec1" Namespace="calico-apiserver" Pod="calico-apiserver-6cb779964f-8zwbp" WorkloadEndpoint="localhost-k8s-calico--apiserver--6cb779964f--8zwbp-eth0" Jan 30 13:02:51.467442 containerd[1447]: 2025-01-30 13:02:51.443 [INFO][4275] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c9f170d5110f0f9df66091317d68e259e896724fc591b94eac2b76aa286b8ec1" Namespace="calico-apiserver" Pod="calico-apiserver-6cb779964f-8zwbp" WorkloadEndpoint="localhost-k8s-calico--apiserver--6cb779964f--8zwbp-eth0" Jan 30 13:02:51.467442 containerd[1447]: 2025-01-30 13:02:51.443 [INFO][4275] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="c9f170d5110f0f9df66091317d68e259e896724fc591b94eac2b76aa286b8ec1" Namespace="calico-apiserver" Pod="calico-apiserver-6cb779964f-8zwbp" WorkloadEndpoint="localhost-k8s-calico--apiserver--6cb779964f--8zwbp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6cb779964f--8zwbp-eth0", GenerateName:"calico-apiserver-6cb779964f-", Namespace:"calico-apiserver", SelfLink:"", UID:"3cc3298b-ef9c-4aeb-903e-4a8f5c9daf9d", ResourceVersion:"817", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 2, 27, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6cb779964f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c9f170d5110f0f9df66091317d68e259e896724fc591b94eac2b76aa286b8ec1", Pod:"calico-apiserver-6cb779964f-8zwbp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1ad9239eb17", MAC:"fa:d3:2a:9e:ef:a8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:02:51.467442 containerd[1447]: 2025-01-30 13:02:51.457 [INFO][4275] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="c9f170d5110f0f9df66091317d68e259e896724fc591b94eac2b76aa286b8ec1" Namespace="calico-apiserver" Pod="calico-apiserver-6cb779964f-8zwbp" WorkloadEndpoint="localhost-k8s-calico--apiserver--6cb779964f--8zwbp-eth0" Jan 30 13:02:51.481913 containerd[1447]: time="2025-01-30T13:02:51.481870943Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:02:51.486722 containerd[1447]: time="2025-01-30T13:02:51.486610699Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=31953828" Jan 30 13:02:51.487643 containerd[1447]: time="2025-01-30T13:02:51.487289018Z" level=info msg="ImageCreate event name:\"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:02:51.488973 systemd[1]: Started cri-containerd-8392468656b32d926bc08bb347d0faaf0648b19b0f982248d1923b950701021d.scope - libcontainer container 8392468656b32d926bc08bb347d0faaf0648b19b0f982248d1923b950701021d. 
Jan 30 13:02:51.493094 containerd[1447]: time="2025-01-30T13:02:51.492670492Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:02:51.494122 containerd[1447]: time="2025-01-30T13:02:51.494074331Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"33323450\" in 1.798482738s" Jan 30 13:02:51.494197 containerd[1447]: time="2025-01-30T13:02:51.494145931Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\"" Jan 30 13:02:51.495957 containerd[1447]: time="2025-01-30T13:02:51.495920969Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 30 13:02:51.496784 containerd[1447]: time="2025-01-30T13:02:51.496437728Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:02:51.496784 containerd[1447]: time="2025-01-30T13:02:51.496506848Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:02:51.496784 containerd[1447]: time="2025-01-30T13:02:51.496517888Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:02:51.496784 containerd[1447]: time="2025-01-30T13:02:51.496602608Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:02:51.506756 containerd[1447]: time="2025-01-30T13:02:51.506676238Z" level=info msg="CreateContainer within sandbox \"a6ea405ad39a9570f2e560727c79c598b9f07fdb3b73419a20036dbf629f18f7\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jan 30 13:02:51.508258 systemd-resolved[1314]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 30 13:02:51.521944 systemd[1]: Started cri-containerd-c9f170d5110f0f9df66091317d68e259e896724fc591b94eac2b76aa286b8ec1.scope - libcontainer container c9f170d5110f0f9df66091317d68e259e896724fc591b94eac2b76aa286b8ec1. 
Jan 30 13:02:51.522713 containerd[1447]: time="2025-01-30T13:02:51.522611301Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-slr48,Uid:e9f4c40c-83e4-4e09-bcf8-7d4d055d34c9,Namespace:calico-system,Attempt:1,} returns sandbox id \"8392468656b32d926bc08bb347d0faaf0648b19b0f982248d1923b950701021d\"" Jan 30 13:02:51.524798 containerd[1447]: time="2025-01-30T13:02:51.524745539Z" level=info msg="CreateContainer within sandbox \"a6ea405ad39a9570f2e560727c79c598b9f07fdb3b73419a20036dbf629f18f7\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"8d7570b6cd9d9ea49e5976e233af1291e66a8b790644c4959da93ea5ab53ffeb\"" Jan 30 13:02:51.525189 containerd[1447]: time="2025-01-30T13:02:51.525155379Z" level=info msg="StartContainer for \"8d7570b6cd9d9ea49e5976e233af1291e66a8b790644c4959da93ea5ab53ffeb\"" Jan 30 13:02:51.536741 systemd-resolved[1314]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 30 13:02:51.561095 containerd[1447]: time="2025-01-30T13:02:51.561056381Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cb779964f-8zwbp,Uid:3cc3298b-ef9c-4aeb-903e-4a8f5c9daf9d,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"c9f170d5110f0f9df66091317d68e259e896724fc591b94eac2b76aa286b8ec1\"" Jan 30 13:02:51.575822 systemd[1]: Started cri-containerd-8d7570b6cd9d9ea49e5976e233af1291e66a8b790644c4959da93ea5ab53ffeb.scope - libcontainer container 8d7570b6cd9d9ea49e5976e233af1291e66a8b790644c4959da93ea5ab53ffeb. Jan 30 13:02:51.614428 containerd[1447]: time="2025-01-30T13:02:51.614304966Z" level=info msg="StartContainer for \"8d7570b6cd9d9ea49e5976e233af1291e66a8b790644c4959da93ea5ab53ffeb\" returns successfully" Jan 30 13:02:52.039384 containerd[1447]: time="2025-01-30T13:02:52.039056247Z" level=info msg="StopPodSandbox for \"46d93fa456940183af25a2d8b69c33bc9abfeba16d9106ec032afae4f6888f7b\"" Jan 30 13:02:52.039384 containerd[1447]: time="2025-01-30T13:02:52.039084687Z" level=info msg="StopPodSandbox for \"4231d25d555a20808ef4a5657d6c8bf54a6941380b8035735f01656c2617a17e\"" Jan 30 13:02:52.088099 systemd-networkd[1371]: caliecbae0db9ae: Gained IPv6LL Jan 30 13:02:52.136045 containerd[1447]: 2025-01-30 13:02:52.094 [INFO][4502] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="4231d25d555a20808ef4a5657d6c8bf54a6941380b8035735f01656c2617a17e" Jan 30 13:02:52.136045 containerd[1447]: 2025-01-30 13:02:52.094 [INFO][4502] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="4231d25d555a20808ef4a5657d6c8bf54a6941380b8035735f01656c2617a17e" iface="eth0" netns="/var/run/netns/cni-fac98e07-f1b4-fc61-d517-d7950ecb644e" Jan 30 13:02:52.136045 containerd[1447]: 2025-01-30 13:02:52.095 [INFO][4502] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="4231d25d555a20808ef4a5657d6c8bf54a6941380b8035735f01656c2617a17e" iface="eth0" netns="/var/run/netns/cni-fac98e07-f1b4-fc61-d517-d7950ecb644e" Jan 30 13:02:52.136045 containerd[1447]: 2025-01-30 13:02:52.095 [INFO][4502] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="4231d25d555a20808ef4a5657d6c8bf54a6941380b8035735f01656c2617a17e" iface="eth0" netns="/var/run/netns/cni-fac98e07-f1b4-fc61-d517-d7950ecb644e" Jan 30 13:02:52.136045 containerd[1447]: 2025-01-30 13:02:52.095 [INFO][4502] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="4231d25d555a20808ef4a5657d6c8bf54a6941380b8035735f01656c2617a17e" Jan 30 13:02:52.136045 containerd[1447]: 2025-01-30 13:02:52.095 [INFO][4502] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4231d25d555a20808ef4a5657d6c8bf54a6941380b8035735f01656c2617a17e" Jan 30 13:02:52.136045 containerd[1447]: 2025-01-30 13:02:52.120 [INFO][4518] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4231d25d555a20808ef4a5657d6c8bf54a6941380b8035735f01656c2617a17e" HandleID="k8s-pod-network.4231d25d555a20808ef4a5657d6c8bf54a6941380b8035735f01656c2617a17e" Workload="localhost-k8s-coredns--6f6b679f8f--drzkj-eth0" Jan 30 13:02:52.136045 containerd[1447]: 2025-01-30 13:02:52.120 [INFO][4518] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:02:52.136045 containerd[1447]: 2025-01-30 13:02:52.120 [INFO][4518] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:02:52.136045 containerd[1447]: 2025-01-30 13:02:52.129 [WARNING][4518] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="4231d25d555a20808ef4a5657d6c8bf54a6941380b8035735f01656c2617a17e" HandleID="k8s-pod-network.4231d25d555a20808ef4a5657d6c8bf54a6941380b8035735f01656c2617a17e" Workload="localhost-k8s-coredns--6f6b679f8f--drzkj-eth0" Jan 30 13:02:52.136045 containerd[1447]: 2025-01-30 13:02:52.129 [INFO][4518] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4231d25d555a20808ef4a5657d6c8bf54a6941380b8035735f01656c2617a17e" HandleID="k8s-pod-network.4231d25d555a20808ef4a5657d6c8bf54a6941380b8035735f01656c2617a17e" Workload="localhost-k8s-coredns--6f6b679f8f--drzkj-eth0" Jan 30 13:02:52.136045 containerd[1447]: 2025-01-30 13:02:52.130 [INFO][4518] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:02:52.136045 containerd[1447]: 2025-01-30 13:02:52.132 [INFO][4502] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="4231d25d555a20808ef4a5657d6c8bf54a6941380b8035735f01656c2617a17e" Jan 30 13:02:52.138651 containerd[1447]: time="2025-01-30T13:02:52.136709312Z" level=info msg="TearDown network for sandbox \"4231d25d555a20808ef4a5657d6c8bf54a6941380b8035735f01656c2617a17e\" successfully" Jan 30 13:02:52.138651 containerd[1447]: time="2025-01-30T13:02:52.136752232Z" level=info msg="StopPodSandbox for \"4231d25d555a20808ef4a5657d6c8bf54a6941380b8035735f01656c2617a17e\" returns successfully" Jan 30 13:02:52.138651 containerd[1447]: time="2025-01-30T13:02:52.137935950Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-drzkj,Uid:70d42510-d7d7-428a-b1a1-8b675ee51848,Namespace:kube-system,Attempt:1,}" Jan 30 13:02:52.138824 kubelet[2447]: E0130 13:02:52.137168 2447 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:02:52.149888 containerd[1447]: 2025-01-30 13:02:52.094 [INFO][4503] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="46d93fa456940183af25a2d8b69c33bc9abfeba16d9106ec032afae4f6888f7b" Jan 30 13:02:52.149888 containerd[1447]: 2025-01-30 13:02:52.094 [INFO][4503] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="46d93fa456940183af25a2d8b69c33bc9abfeba16d9106ec032afae4f6888f7b" iface="eth0" netns="/var/run/netns/cni-5979a830-6121-105d-b595-27c8f485de35" Jan 30 13:02:52.149888 containerd[1447]: 2025-01-30 13:02:52.094 [INFO][4503] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="46d93fa456940183af25a2d8b69c33bc9abfeba16d9106ec032afae4f6888f7b" iface="eth0" netns="/var/run/netns/cni-5979a830-6121-105d-b595-27c8f485de35" Jan 30 13:02:52.149888 containerd[1447]: 2025-01-30 13:02:52.095 [INFO][4503] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="46d93fa456940183af25a2d8b69c33bc9abfeba16d9106ec032afae4f6888f7b" iface="eth0" netns="/var/run/netns/cni-5979a830-6121-105d-b595-27c8f485de35" Jan 30 13:02:52.149888 containerd[1447]: 2025-01-30 13:02:52.095 [INFO][4503] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="46d93fa456940183af25a2d8b69c33bc9abfeba16d9106ec032afae4f6888f7b" Jan 30 13:02:52.149888 containerd[1447]: 2025-01-30 13:02:52.095 [INFO][4503] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="46d93fa456940183af25a2d8b69c33bc9abfeba16d9106ec032afae4f6888f7b" Jan 30 13:02:52.149888 containerd[1447]: 2025-01-30 13:02:52.120 [INFO][4517] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="46d93fa456940183af25a2d8b69c33bc9abfeba16d9106ec032afae4f6888f7b" HandleID="k8s-pod-network.46d93fa456940183af25a2d8b69c33bc9abfeba16d9106ec032afae4f6888f7b" Workload="localhost-k8s-coredns--6f6b679f8f--k4kch-eth0" Jan 30 13:02:52.149888 containerd[1447]: 2025-01-30 13:02:52.120 [INFO][4517] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:02:52.149888 containerd[1447]: 2025-01-30 13:02:52.130 [INFO][4517] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:02:52.149888 containerd[1447]: 2025-01-30 13:02:52.141 [WARNING][4517] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="46d93fa456940183af25a2d8b69c33bc9abfeba16d9106ec032afae4f6888f7b" HandleID="k8s-pod-network.46d93fa456940183af25a2d8b69c33bc9abfeba16d9106ec032afae4f6888f7b" Workload="localhost-k8s-coredns--6f6b679f8f--k4kch-eth0" Jan 30 13:02:52.149888 containerd[1447]: 2025-01-30 13:02:52.141 [INFO][4517] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="46d93fa456940183af25a2d8b69c33bc9abfeba16d9106ec032afae4f6888f7b" HandleID="k8s-pod-network.46d93fa456940183af25a2d8b69c33bc9abfeba16d9106ec032afae4f6888f7b" Workload="localhost-k8s-coredns--6f6b679f8f--k4kch-eth0" Jan 30 13:02:52.149888 containerd[1447]: 2025-01-30 13:02:52.145 [INFO][4517] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:02:52.149888 containerd[1447]: 2025-01-30 13:02:52.147 [INFO][4503] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="46d93fa456940183af25a2d8b69c33bc9abfeba16d9106ec032afae4f6888f7b" Jan 30 13:02:52.150342 containerd[1447]: time="2025-01-30T13:02:52.150008539Z" level=info msg="TearDown network for sandbox \"46d93fa456940183af25a2d8b69c33bc9abfeba16d9106ec032afae4f6888f7b\" successfully" Jan 30 13:02:52.150342 containerd[1447]: time="2025-01-30T13:02:52.150034459Z" level=info msg="StopPodSandbox for \"46d93fa456940183af25a2d8b69c33bc9abfeba16d9106ec032afae4f6888f7b\" returns successfully" Jan 30 13:02:52.150395 kubelet[2447]: E0130 13:02:52.150307 2447 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:02:52.151337 containerd[1447]: time="2025-01-30T13:02:52.151221697Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-k4kch,Uid:c604cc46-99d4-4353-8500-4bb310160935,Namespace:kube-system,Attempt:1,}" Jan 30 13:02:52.295058 kubelet[2447]: I0130 13:02:52.294919 2447 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-849d69c5fc-hlqw7" podStartSLOduration=22.495121581 podStartE2EDuration="24.294898077s" podCreationTimestamp="2025-01-30 13:02:28 +0000 UTC" firstStartedPulling="2025-01-30 13:02:49.695332914 +0000 UTC m=+34.769896926" lastFinishedPulling="2025-01-30 13:02:51.49510945 +0000 UTC m=+36.569673422" observedRunningTime="2025-01-30 13:02:52.233810257 +0000 UTC m=+37.308374269" watchObservedRunningTime="2025-01-30 13:02:52.294898077 +0000 UTC m=+37.369462089" Jan 30 13:02:52.332290 systemd-networkd[1371]: calib9e90fec21e: Link UP Jan 30 13:02:52.332808 systemd-networkd[1371]: calib9e90fec21e: Gained carrier Jan 30 13:02:52.350016 containerd[1447]: 2025-01-30 13:02:52.215 [INFO][4542] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--6f6b679f8f--k4kch-eth0 coredns-6f6b679f8f- kube-system c604cc46-99d4-4353-8500-4bb310160935 840 0 2025-01-30 13:02:21 +0000 UTC <nil> <nil> map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-6f6b679f8f-k4kch eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calib9e90fec21e [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="d251903f1040fc7ad34f10b5b79cb8db9d777dfb15dfe894af79d2d69a4ac74e" Namespace="kube-system" Pod="coredns-6f6b679f8f-k4kch" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--k4kch-" Jan 30 13:02:52.350016 containerd[1447]: 2025-01-30 13:02:52.217 [INFO][4542] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="d251903f1040fc7ad34f10b5b79cb8db9d777dfb15dfe894af79d2d69a4ac74e" Namespace="kube-system" Pod="coredns-6f6b679f8f-k4kch" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--k4kch-eth0" Jan 30 13:02:52.350016 containerd[1447]: 2025-01-30 13:02:52.265 [INFO][4572] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d251903f1040fc7ad34f10b5b79cb8db9d777dfb15dfe894af79d2d69a4ac74e" HandleID="k8s-pod-network.d251903f1040fc7ad34f10b5b79cb8db9d777dfb15dfe894af79d2d69a4ac74e" Workload="localhost-k8s-coredns--6f6b679f8f--k4kch-eth0" Jan 30 13:02:52.350016 containerd[1447]: 2025-01-30 13:02:52.281 [INFO][4572] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="d251903f1040fc7ad34f10b5b79cb8db9d777dfb15dfe894af79d2d69a4ac74e" HandleID="k8s-pod-network.d251903f1040fc7ad34f10b5b79cb8db9d777dfb15dfe894af79d2d69a4ac74e" Workload="localhost-k8s-coredns--6f6b679f8f--k4kch-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003aaf20), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-6f6b679f8f-k4kch", "timestamp":"2025-01-30 13:02:52.265967266 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 13:02:52.350016 containerd[1447]: 2025-01-30 13:02:52.281 [INFO][4572] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:02:52.350016 containerd[1447]: 2025-01-30 13:02:52.281 [INFO][4572] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:02:52.350016 containerd[1447]: 2025-01-30 13:02:52.281 [INFO][4572] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 30 13:02:52.350016 containerd[1447]: 2025-01-30 13:02:52.284 [INFO][4572] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.d251903f1040fc7ad34f10b5b79cb8db9d777dfb15dfe894af79d2d69a4ac74e" host="localhost" Jan 30 13:02:52.350016 containerd[1447]: 2025-01-30 13:02:52.290 [INFO][4572] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 30 13:02:52.350016 containerd[1447]: 2025-01-30 13:02:52.298 [INFO][4572] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 30 13:02:52.350016 containerd[1447]: 2025-01-30 13:02:52.302 [INFO][4572] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 30 13:02:52.350016 containerd[1447]: 2025-01-30 13:02:52.305 [INFO][4572] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 30 13:02:52.350016 containerd[1447]: 2025-01-30 13:02:52.305 [INFO][4572] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d251903f1040fc7ad34f10b5b79cb8db9d777dfb15dfe894af79d2d69a4ac74e" host="localhost" Jan 30 13:02:52.350016 containerd[1447]: 2025-01-30 13:02:52.309 [INFO][4572] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.d251903f1040fc7ad34f10b5b79cb8db9d777dfb15dfe894af79d2d69a4ac74e Jan 30 13:02:52.350016 containerd[1447]: 2025-01-30 13:02:52.313 [INFO][4572] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d251903f1040fc7ad34f10b5b79cb8db9d777dfb15dfe894af79d2d69a4ac74e" host="localhost" Jan 30 13:02:52.350016 containerd[1447]: 2025-01-30 13:02:52.327 [INFO][4572] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.d251903f1040fc7ad34f10b5b79cb8db9d777dfb15dfe894af79d2d69a4ac74e" host="localhost" Jan 30 13:02:52.350016 containerd[1447]: 2025-01-30 13:02:52.327 [INFO][4572] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.d251903f1040fc7ad34f10b5b79cb8db9d777dfb15dfe894af79d2d69a4ac74e" host="localhost" Jan 30 13:02:52.350016 containerd[1447]: 2025-01-30 13:02:52.327 [INFO][4572] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 30 13:02:52.350016 containerd[1447]: 2025-01-30 13:02:52.327 [INFO][4572] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="d251903f1040fc7ad34f10b5b79cb8db9d777dfb15dfe894af79d2d69a4ac74e" HandleID="k8s-pod-network.d251903f1040fc7ad34f10b5b79cb8db9d777dfb15dfe894af79d2d69a4ac74e" Workload="localhost-k8s-coredns--6f6b679f8f--k4kch-eth0" Jan 30 13:02:52.350642 containerd[1447]: 2025-01-30 13:02:52.330 [INFO][4542] cni-plugin/k8s.go 386: Populated endpoint ContainerID="d251903f1040fc7ad34f10b5b79cb8db9d777dfb15dfe894af79d2d69a4ac74e" Namespace="kube-system" Pod="coredns-6f6b679f8f-k4kch" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--k4kch-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--k4kch-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"c604cc46-99d4-4353-8500-4bb310160935", ResourceVersion:"840", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 2, 21, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-6f6b679f8f-k4kch", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib9e90fec21e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:02:52.350642 containerd[1447]: 2025-01-30 13:02:52.330 [INFO][4542] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="d251903f1040fc7ad34f10b5b79cb8db9d777dfb15dfe894af79d2d69a4ac74e" Namespace="kube-system" Pod="coredns-6f6b679f8f-k4kch" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--k4kch-eth0" Jan 30 13:02:52.350642 containerd[1447]: 2025-01-30 13:02:52.330 [INFO][4542] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib9e90fec21e ContainerID="d251903f1040fc7ad34f10b5b79cb8db9d777dfb15dfe894af79d2d69a4ac74e" Namespace="kube-system" Pod="coredns-6f6b679f8f-k4kch" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--k4kch-eth0" Jan 30 13:02:52.350642 containerd[1447]: 2025-01-30 13:02:52.332 [INFO][4542] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d251903f1040fc7ad34f10b5b79cb8db9d777dfb15dfe894af79d2d69a4ac74e" Namespace="kube-system" Pod="coredns-6f6b679f8f-k4kch" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--k4kch-eth0" Jan 30 13:02:52.350642 containerd[1447]: 2025-01-30 
13:02:52.333 [INFO][4542] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="d251903f1040fc7ad34f10b5b79cb8db9d777dfb15dfe894af79d2d69a4ac74e" Namespace="kube-system" Pod="coredns-6f6b679f8f-k4kch" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--k4kch-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--k4kch-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"c604cc46-99d4-4353-8500-4bb310160935", ResourceVersion:"840", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 2, 21, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d251903f1040fc7ad34f10b5b79cb8db9d777dfb15dfe894af79d2d69a4ac74e", Pod:"coredns-6f6b679f8f-k4kch", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib9e90fec21e", MAC:"a6:9c:63:8a:7f:00", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:02:52.350642 containerd[1447]: 2025-01-30 13:02:52.344 [INFO][4542] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="d251903f1040fc7ad34f10b5b79cb8db9d777dfb15dfe894af79d2d69a4ac74e" Namespace="kube-system" Pod="coredns-6f6b679f8f-k4kch" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--k4kch-eth0" Jan 30 13:02:52.384806 containerd[1447]: time="2025-01-30T13:02:52.383850511Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:02:52.384806 containerd[1447]: time="2025-01-30T13:02:52.383914591Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:02:52.384806 containerd[1447]: time="2025-01-30T13:02:52.383924911Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:02:52.384806 containerd[1447]: time="2025-01-30T13:02:52.384003151Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:02:52.419704 systemd[1]: run-netns-cni\x2dfac98e07\x2df1b4\x2dfc61\x2dd517\x2dd7950ecb644e.mount: Deactivated successfully. 
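The WorkloadEndpointPort values in the coredns endpoint dumps above are printed in hexadecimal. Decoded, they are the ordinary CoreDNS ports; a trivial Go confirmation:

package main

import "fmt"

func main() {
    // Port values as printed in the endpoint dumps above, in decimal.
    fmt.Println(0x35)   // 53   (dns, dns-tcp)
    fmt.Println(0x23c1) // 9153 (metrics)
}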
Jan 30 13:02:52.419921 systemd[1]: run-netns-cni\x2d5979a830\x2d6121\x2d105d\x2db595\x2d27c8f485de35.mount: Deactivated successfully. Jan 30 13:02:52.432839 systemd[1]: Started cri-containerd-d251903f1040fc7ad34f10b5b79cb8db9d777dfb15dfe894af79d2d69a4ac74e.scope - libcontainer container d251903f1040fc7ad34f10b5b79cb8db9d777dfb15dfe894af79d2d69a4ac74e. Jan 30 13:02:52.447170 systemd-resolved[1314]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 30 13:02:52.457287 systemd-networkd[1371]: cali212b073d8c1: Link UP Jan 30 13:02:52.457707 systemd-networkd[1371]: cali212b073d8c1: Gained carrier Jan 30 13:02:52.473335 containerd[1447]: time="2025-01-30T13:02:52.473283544Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-k4kch,Uid:c604cc46-99d4-4353-8500-4bb310160935,Namespace:kube-system,Attempt:1,} returns sandbox id \"d251903f1040fc7ad34f10b5b79cb8db9d777dfb15dfe894af79d2d69a4ac74e\"" Jan 30 13:02:52.474125 kubelet[2447]: E0130 13:02:52.474093 2447 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:02:52.476416 containerd[1447]: time="2025-01-30T13:02:52.476378141Z" level=info msg="CreateContainer within sandbox \"d251903f1040fc7ad34f10b5b79cb8db9d777dfb15dfe894af79d2d69a4ac74e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 30 13:02:52.484286 containerd[1447]: 2025-01-30 13:02:52.210 [INFO][4531] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--6f6b679f8f--drzkj-eth0 coredns-6f6b679f8f- kube-system 70d42510-d7d7-428a-b1a1-8b675ee51848 839 0 2025-01-30 13:02:21 +0000 UTC <nil> <nil> map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-6f6b679f8f-drzkj eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali212b073d8c1 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="253750dc5646a8da46e38cad546bbd18750fff95a32fc27211587aab7df8124a" Namespace="kube-system" Pod="coredns-6f6b679f8f-drzkj" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--drzkj-" Jan 30 13:02:52.484286 containerd[1447]: 2025-01-30 13:02:52.211 [INFO][4531] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="253750dc5646a8da46e38cad546bbd18750fff95a32fc27211587aab7df8124a" Namespace="kube-system" Pod="coredns-6f6b679f8f-drzkj" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--drzkj-eth0" Jan 30 13:02:52.484286 containerd[1447]: 2025-01-30 13:02:52.265 [INFO][4564] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="253750dc5646a8da46e38cad546bbd18750fff95a32fc27211587aab7df8124a" HandleID="k8s-pod-network.253750dc5646a8da46e38cad546bbd18750fff95a32fc27211587aab7df8124a" Workload="localhost-k8s-coredns--6f6b679f8f--drzkj-eth0" Jan 30 13:02:52.484286 containerd[1447]: 2025-01-30 13:02:52.283 [INFO][4564] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="253750dc5646a8da46e38cad546bbd18750fff95a32fc27211587aab7df8124a" HandleID="k8s-pod-network.253750dc5646a8da46e38cad546bbd18750fff95a32fc27211587aab7df8124a" Workload="localhost-k8s-coredns--6f6b679f8f--drzkj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003a9080), 
Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-6f6b679f8f-drzkj", "timestamp":"2025-01-30 13:02:52.265076227 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 13:02:52.484286 containerd[1447]: 2025-01-30 13:02:52.283 [INFO][4564] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:02:52.484286 containerd[1447]: 2025-01-30 13:02:52.328 [INFO][4564] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:02:52.484286 containerd[1447]: 2025-01-30 13:02:52.328 [INFO][4564] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 30 13:02:52.484286 containerd[1447]: 2025-01-30 13:02:52.385 [INFO][4564] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.253750dc5646a8da46e38cad546bbd18750fff95a32fc27211587aab7df8124a" host="localhost" Jan 30 13:02:52.484286 containerd[1447]: 2025-01-30 13:02:52.418 [INFO][4564] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 30 13:02:52.484286 containerd[1447]: 2025-01-30 13:02:52.427 [INFO][4564] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 30 13:02:52.484286 containerd[1447]: 2025-01-30 13:02:52.430 [INFO][4564] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 30 13:02:52.484286 containerd[1447]: 2025-01-30 13:02:52.434 [INFO][4564] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 30 13:02:52.484286 containerd[1447]: 2025-01-30 13:02:52.434 [INFO][4564] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.253750dc5646a8da46e38cad546bbd18750fff95a32fc27211587aab7df8124a" host="localhost" Jan 30 13:02:52.484286 containerd[1447]: 2025-01-30 13:02:52.437 [INFO][4564] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.253750dc5646a8da46e38cad546bbd18750fff95a32fc27211587aab7df8124a Jan 30 13:02:52.484286 containerd[1447]: 2025-01-30 13:02:52.442 [INFO][4564] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.253750dc5646a8da46e38cad546bbd18750fff95a32fc27211587aab7df8124a" host="localhost" Jan 30 13:02:52.484286 containerd[1447]: 2025-01-30 13:02:52.451 [INFO][4564] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.253750dc5646a8da46e38cad546bbd18750fff95a32fc27211587aab7df8124a" host="localhost" Jan 30 13:02:52.484286 containerd[1447]: 2025-01-30 13:02:52.451 [INFO][4564] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.253750dc5646a8da46e38cad546bbd18750fff95a32fc27211587aab7df8124a" host="localhost" Jan 30 13:02:52.484286 containerd[1447]: 2025-01-30 13:02:52.451 [INFO][4564] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
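The kubelet dns.go warnings repeated above mean the node's resolv.conf lists more nameservers than the resolver limit of three, so kubelet applies at most three for the pod (here 1.1.1.1, 1.0.0.1, 8.8.8.8). A minimal Go sketch of that trimming follows; the fourth entry below is a documentation placeholder, since the log does not show which nameserver was actually omitted, and the assumption that the first three are kept is the sketch's, not the log's.

package main

import "fmt"

// maxNameservers mirrors the resolver limit the kubelet warning refers to.
const maxNameservers = 3

// applyLimit keeps at most maxNameservers entries, assuming the first ones win.
func applyLimit(ns []string) []string {
    if len(ns) <= maxNameservers {
        return ns
    }
    return ns[:maxNameservers]
}

func main() {
    // 192.0.2.1 is a placeholder for the omitted entry (real value unknown).
    configured := []string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "192.0.2.1"}
    fmt.Println("applied nameserver line:", applyLimit(configured))
}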
Jan 30 13:02:52.484286 containerd[1447]: 2025-01-30 13:02:52.451 [INFO][4564] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="253750dc5646a8da46e38cad546bbd18750fff95a32fc27211587aab7df8124a" HandleID="k8s-pod-network.253750dc5646a8da46e38cad546bbd18750fff95a32fc27211587aab7df8124a" Workload="localhost-k8s-coredns--6f6b679f8f--drzkj-eth0" Jan 30 13:02:52.485858 containerd[1447]: 2025-01-30 13:02:52.454 [INFO][4531] cni-plugin/k8s.go 386: Populated endpoint ContainerID="253750dc5646a8da46e38cad546bbd18750fff95a32fc27211587aab7df8124a" Namespace="kube-system" Pod="coredns-6f6b679f8f-drzkj" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--drzkj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--drzkj-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"70d42510-d7d7-428a-b1a1-8b675ee51848", ResourceVersion:"839", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 2, 21, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-6f6b679f8f-drzkj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali212b073d8c1", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:02:52.485858 containerd[1447]: 2025-01-30 13:02:52.454 [INFO][4531] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.134/32] ContainerID="253750dc5646a8da46e38cad546bbd18750fff95a32fc27211587aab7df8124a" Namespace="kube-system" Pod="coredns-6f6b679f8f-drzkj" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--drzkj-eth0" Jan 30 13:02:52.485858 containerd[1447]: 2025-01-30 13:02:52.454 [INFO][4531] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali212b073d8c1 ContainerID="253750dc5646a8da46e38cad546bbd18750fff95a32fc27211587aab7df8124a" Namespace="kube-system" Pod="coredns-6f6b679f8f-drzkj" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--drzkj-eth0" Jan 30 13:02:52.485858 containerd[1447]: 2025-01-30 13:02:52.457 [INFO][4531] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="253750dc5646a8da46e38cad546bbd18750fff95a32fc27211587aab7df8124a" Namespace="kube-system" Pod="coredns-6f6b679f8f-drzkj" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--drzkj-eth0" Jan 30 13:02:52.485858 containerd[1447]: 2025-01-30 
13:02:52.457 [INFO][4531] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="253750dc5646a8da46e38cad546bbd18750fff95a32fc27211587aab7df8124a" Namespace="kube-system" Pod="coredns-6f6b679f8f-drzkj" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--drzkj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--drzkj-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"70d42510-d7d7-428a-b1a1-8b675ee51848", ResourceVersion:"839", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 2, 21, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"253750dc5646a8da46e38cad546bbd18750fff95a32fc27211587aab7df8124a", Pod:"coredns-6f6b679f8f-drzkj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali212b073d8c1", MAC:"06:b7:2d:72:90:b1", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:02:52.485858 containerd[1447]: 2025-01-30 13:02:52.481 [INFO][4531] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="253750dc5646a8da46e38cad546bbd18750fff95a32fc27211587aab7df8124a" Namespace="kube-system" Pod="coredns-6f6b679f8f-drzkj" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--drzkj-eth0" Jan 30 13:02:52.500124 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2420321480.mount: Deactivated successfully. Jan 30 13:02:52.503725 containerd[1447]: time="2025-01-30T13:02:52.503666554Z" level=info msg="CreateContainer within sandbox \"d251903f1040fc7ad34f10b5b79cb8db9d777dfb15dfe894af79d2d69a4ac74e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5b6faa120da3ef329199ee77deb6f720a21864f0a565fe827be7bfbcfa913bf6\"" Jan 30 13:02:52.504250 containerd[1447]: time="2025-01-30T13:02:52.504197873Z" level=info msg="StartContainer for \"5b6faa120da3ef329199ee77deb6f720a21864f0a565fe827be7bfbcfa913bf6\"" Jan 30 13:02:52.517837 containerd[1447]: time="2025-01-30T13:02:52.517599740Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:02:52.517837 containerd[1447]: time="2025-01-30T13:02:52.517663860Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:02:52.517837 containerd[1447]: time="2025-01-30T13:02:52.517683780Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:02:52.517837 containerd[1447]: time="2025-01-30T13:02:52.517772540Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:02:52.540115 systemd[1]: Started cri-containerd-253750dc5646a8da46e38cad546bbd18750fff95a32fc27211587aab7df8124a.scope - libcontainer container 253750dc5646a8da46e38cad546bbd18750fff95a32fc27211587aab7df8124a. Jan 30 13:02:52.544131 systemd[1]: Started cri-containerd-5b6faa120da3ef329199ee77deb6f720a21864f0a565fe827be7bfbcfa913bf6.scope - libcontainer container 5b6faa120da3ef329199ee77deb6f720a21864f0a565fe827be7bfbcfa913bf6. Jan 30 13:02:52.564037 systemd-resolved[1314]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 30 13:02:52.588907 containerd[1447]: time="2025-01-30T13:02:52.588609671Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-drzkj,Uid:70d42510-d7d7-428a-b1a1-8b675ee51848,Namespace:kube-system,Attempt:1,} returns sandbox id \"253750dc5646a8da46e38cad546bbd18750fff95a32fc27211587aab7df8124a\"" Jan 30 13:02:52.591805 kubelet[2447]: E0130 13:02:52.591691 2447 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:02:52.593647 containerd[1447]: time="2025-01-30T13:02:52.593382827Z" level=info msg="CreateContainer within sandbox \"253750dc5646a8da46e38cad546bbd18750fff95a32fc27211587aab7df8124a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 30 13:02:52.599993 containerd[1447]: time="2025-01-30T13:02:52.599943620Z" level=info msg="StartContainer for \"5b6faa120da3ef329199ee77deb6f720a21864f0a565fe827be7bfbcfa913bf6\" returns successfully" Jan 30 13:02:52.613440 containerd[1447]: time="2025-01-30T13:02:52.613315767Z" level=info msg="CreateContainer within sandbox \"253750dc5646a8da46e38cad546bbd18750fff95a32fc27211587aab7df8124a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4f7f8679a848794ff653e440121ef9dfd84fe953e86028e7a7f05b131f747a19\"" Jan 30 13:02:52.614713 containerd[1447]: time="2025-01-30T13:02:52.614665766Z" level=info msg="StartContainer for \"4f7f8679a848794ff653e440121ef9dfd84fe953e86028e7a7f05b131f747a19\"" Jan 30 13:02:52.659929 systemd[1]: Started cri-containerd-4f7f8679a848794ff653e440121ef9dfd84fe953e86028e7a7f05b131f747a19.scope - libcontainer container 4f7f8679a848794ff653e440121ef9dfd84fe953e86028e7a7f05b131f747a19. 
Jan 30 13:02:52.713154 containerd[1447]: time="2025-01-30T13:02:52.713105190Z" level=info msg="StartContainer for \"4f7f8679a848794ff653e440121ef9dfd84fe953e86028e7a7f05b131f747a19\" returns successfully" Jan 30 13:02:52.919812 systemd-networkd[1371]: cali978ee8f3de7: Gained IPv6LL Jan 30 13:02:52.983874 systemd-networkd[1371]: cali1ad9239eb17: Gained IPv6LL Jan 30 13:02:53.132883 containerd[1447]: time="2025-01-30T13:02:53.132834029Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:02:53.133949 containerd[1447]: time="2025-01-30T13:02:53.133862148Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=39298409" Jan 30 13:02:53.134844 containerd[1447]: time="2025-01-30T13:02:53.134808547Z" level=info msg="ImageCreate event name:\"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:02:53.141465 containerd[1447]: time="2025-01-30T13:02:53.141064701Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:02:53.141801 containerd[1447]: time="2025-01-30T13:02:53.141761461Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"40668079\" in 1.645798092s" Jan 30 13:02:53.141886 containerd[1447]: time="2025-01-30T13:02:53.141869941Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\"" Jan 30 13:02:53.143379 containerd[1447]: time="2025-01-30T13:02:53.143345739Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Jan 30 13:02:53.148008 containerd[1447]: time="2025-01-30T13:02:53.147967175Z" level=info msg="CreateContainer within sandbox \"ec132c2b7f9a491a92d438ecafc5a14c1c5aae2b8254433f1c6c7519153f6fa0\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 30 13:02:53.158700 containerd[1447]: time="2025-01-30T13:02:53.158546685Z" level=info msg="CreateContainer within sandbox \"ec132c2b7f9a491a92d438ecafc5a14c1c5aae2b8254433f1c6c7519153f6fa0\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"81656b07cfe44a06196f9affa4532bf9052b2a2a468ad2c5389278af9657a985\"" Jan 30 13:02:53.159327 containerd[1447]: time="2025-01-30T13:02:53.159297765Z" level=info msg="StartContainer for \"81656b07cfe44a06196f9affa4532bf9052b2a2a468ad2c5389278af9657a985\"" Jan 30 13:02:53.181810 systemd[1]: Started cri-containerd-81656b07cfe44a06196f9affa4532bf9052b2a2a468ad2c5389278af9657a985.scope - libcontainer container 81656b07cfe44a06196f9affa4532bf9052b2a2a468ad2c5389278af9657a985. 
Jan 30 13:02:53.217923 containerd[1447]: time="2025-01-30T13:02:53.217873151Z" level=info msg="StartContainer for \"81656b07cfe44a06196f9affa4532bf9052b2a2a468ad2c5389278af9657a985\" returns successfully" Jan 30 13:02:53.228736 kubelet[2447]: E0130 13:02:53.228702 2447 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:02:53.233278 kubelet[2447]: E0130 13:02:53.232849 2447 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:02:53.252655 kubelet[2447]: I0130 13:02:53.252585 2447 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-drzkj" podStartSLOduration=32.252567599 podStartE2EDuration="32.252567599s" podCreationTimestamp="2025-01-30 13:02:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:02:53.25245716 +0000 UTC m=+38.327021172" watchObservedRunningTime="2025-01-30 13:02:53.252567599 +0000 UTC m=+38.327131611" Jan 30 13:02:53.301125 kubelet[2447]: I0130 13:02:53.299596 2447 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6cb779964f-9bb6z" podStartSLOduration=23.531386835 podStartE2EDuration="26.299576317s" podCreationTimestamp="2025-01-30 13:02:27 +0000 UTC" firstStartedPulling="2025-01-30 13:02:50.374438938 +0000 UTC m=+35.449002950" lastFinishedPulling="2025-01-30 13:02:53.14262842 +0000 UTC m=+38.217192432" observedRunningTime="2025-01-30 13:02:53.281572693 +0000 UTC m=+38.356136705" watchObservedRunningTime="2025-01-30 13:02:53.299576317 +0000 UTC m=+38.374140329" Jan 30 13:02:53.301409 kubelet[2447]: I0130 13:02:53.301340 2447 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-k4kch" podStartSLOduration=32.301317195 podStartE2EDuration="32.301317195s" podCreationTimestamp="2025-01-30 13:02:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:02:53.299204237 +0000 UTC m=+38.373768249" watchObservedRunningTime="2025-01-30 13:02:53.301317195 +0000 UTC m=+38.375881167" Jan 30 13:02:53.943987 systemd-networkd[1371]: calib9e90fec21e: Gained IPv6LL Jan 30 13:02:54.067502 containerd[1447]: time="2025-01-30T13:02:54.067282339Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:02:54.068676 containerd[1447]: time="2025-01-30T13:02:54.068431138Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7464730" Jan 30 13:02:54.069600 containerd[1447]: time="2025-01-30T13:02:54.069561577Z" level=info msg="ImageCreate event name:\"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:02:54.075479 containerd[1447]: time="2025-01-30T13:02:54.075431612Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:02:54.076305 containerd[1447]: time="2025-01-30T13:02:54.076271611Z" level=info msg="Pulled 
image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"8834384\" in 932.890152ms" Jan 30 13:02:54.076412 containerd[1447]: time="2025-01-30T13:02:54.076319451Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\"" Jan 30 13:02:54.078469 containerd[1447]: time="2025-01-30T13:02:54.077608610Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 30 13:02:54.078987 containerd[1447]: time="2025-01-30T13:02:54.078956769Z" level=info msg="CreateContainer within sandbox \"8392468656b32d926bc08bb347d0faaf0648b19b0f982248d1923b950701021d\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jan 30 13:02:54.097704 containerd[1447]: time="2025-01-30T13:02:54.097649313Z" level=info msg="CreateContainer within sandbox \"8392468656b32d926bc08bb347d0faaf0648b19b0f982248d1923b950701021d\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"2d6fd6855e165a30cd56bf2086c8e92e526051f7b27566d7727b43e95bb03958\"" Jan 30 13:02:54.098423 containerd[1447]: time="2025-01-30T13:02:54.098367112Z" level=info msg="StartContainer for \"2d6fd6855e165a30cd56bf2086c8e92e526051f7b27566d7727b43e95bb03958\"" Jan 30 13:02:54.136853 systemd[1]: Started cri-containerd-2d6fd6855e165a30cd56bf2086c8e92e526051f7b27566d7727b43e95bb03958.scope - libcontainer container 2d6fd6855e165a30cd56bf2086c8e92e526051f7b27566d7727b43e95bb03958. Jan 30 13:02:54.166492 containerd[1447]: time="2025-01-30T13:02:54.166442014Z" level=info msg="StartContainer for \"2d6fd6855e165a30cd56bf2086c8e92e526051f7b27566d7727b43e95bb03958\" returns successfully" Jan 30 13:02:54.240059 kubelet[2447]: I0130 13:02:54.239138 2447 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 13:02:54.240059 kubelet[2447]: E0130 13:02:54.239510 2447 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:02:54.240059 kubelet[2447]: E0130 13:02:54.239641 2447 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:02:54.318104 containerd[1447]: time="2025-01-30T13:02:54.317645724Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:02:54.318332 containerd[1447]: time="2025-01-30T13:02:54.318302444Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77" Jan 30 13:02:54.320817 containerd[1447]: time="2025-01-30T13:02:54.320772642Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"40668079\" in 243.039192ms" Jan 30 13:02:54.320817 containerd[1447]: time="2025-01-30T13:02:54.320811882Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\"" Jan 30 13:02:54.322059 containerd[1447]: time="2025-01-30T13:02:54.322027001Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Jan 30 13:02:54.323290 containerd[1447]: time="2025-01-30T13:02:54.323255240Z" level=info msg="CreateContainer within sandbox \"c9f170d5110f0f9df66091317d68e259e896724fc591b94eac2b76aa286b8ec1\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 30 13:02:54.336047 containerd[1447]: time="2025-01-30T13:02:54.335991989Z" level=info msg="CreateContainer within sandbox \"c9f170d5110f0f9df66091317d68e259e896724fc591b94eac2b76aa286b8ec1\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"ea548d8cfa2ef54c7a560b49f1920cae135e295fd8a36b5c941a986f73a93e39\"" Jan 30 13:02:54.338687 containerd[1447]: time="2025-01-30T13:02:54.338081187Z" level=info msg="StartContainer for \"ea548d8cfa2ef54c7a560b49f1920cae135e295fd8a36b5c941a986f73a93e39\"" Jan 30 13:02:54.368824 systemd[1]: Started cri-containerd-ea548d8cfa2ef54c7a560b49f1920cae135e295fd8a36b5c941a986f73a93e39.scope - libcontainer container ea548d8cfa2ef54c7a560b49f1920cae135e295fd8a36b5c941a986f73a93e39. Jan 30 13:02:54.410796 containerd[1447]: time="2025-01-30T13:02:54.410747845Z" level=info msg="StartContainer for \"ea548d8cfa2ef54c7a560b49f1920cae135e295fd8a36b5c941a986f73a93e39\" returns successfully" Jan 30 13:02:54.455749 systemd-networkd[1371]: cali212b073d8c1: Gained IPv6LL Jan 30 13:02:54.608071 systemd[1]: Started sshd@8-10.0.0.89:22-10.0.0.1:53030.service - OpenSSH per-connection server daemon (10.0.0.1:53030). Jan 30 13:02:54.713937 sshd[4918]: Accepted publickey for core from 10.0.0.1 port 53030 ssh2: RSA SHA256:L/o4MUo/PVZc4DGxUVgHzL5cb5Nt9LAG1MaqWWrUsp0 Jan 30 13:02:54.716037 sshd[4918]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:02:54.720774 systemd-logind[1423]: New session 9 of user core. Jan 30 13:02:54.731334 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 30 13:02:55.008302 sshd[4918]: pam_unix(sshd:session): session closed for user core Jan 30 13:02:55.013299 systemd[1]: sshd@8-10.0.0.89:22-10.0.0.1:53030.service: Deactivated successfully. Jan 30 13:02:55.016670 systemd[1]: session-9.scope: Deactivated successfully. Jan 30 13:02:55.017778 systemd-logind[1423]: Session 9 logged out. Waiting for processes to exit. Jan 30 13:02:55.019923 systemd-logind[1423]: Removed session 9. 
Jan 30 13:02:55.253599 kubelet[2447]: E0130 13:02:55.253564 2447 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:02:55.254961 kubelet[2447]: E0130 13:02:55.254787 2447 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:02:55.269388 kubelet[2447]: I0130 13:02:55.268491 2447 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6cb779964f-8zwbp" podStartSLOduration=25.509520543 podStartE2EDuration="28.268473164s" podCreationTimestamp="2025-01-30 13:02:27 +0000 UTC" firstStartedPulling="2025-01-30 13:02:51.56250926 +0000 UTC m=+36.637073232" lastFinishedPulling="2025-01-30 13:02:54.321461841 +0000 UTC m=+39.396025853" observedRunningTime="2025-01-30 13:02:55.267430805 +0000 UTC m=+40.341994817" watchObservedRunningTime="2025-01-30 13:02:55.268473164 +0000 UTC m=+40.343037176" Jan 30 13:02:55.755139 containerd[1447]: time="2025-01-30T13:02:55.754900534Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:02:55.758120 containerd[1447]: time="2025-01-30T13:02:55.758082731Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=9883368" Jan 30 13:02:55.768475 containerd[1447]: time="2025-01-30T13:02:55.768407763Z" level=info msg="ImageCreate event name:\"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:02:55.774023 containerd[1447]: time="2025-01-30T13:02:55.773923998Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:02:55.774933 containerd[1447]: time="2025-01-30T13:02:55.774599078Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11252974\" in 1.452537397s" Jan 30 13:02:55.774933 containerd[1447]: time="2025-01-30T13:02:55.774648998Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\"" Jan 30 13:02:55.778814 containerd[1447]: time="2025-01-30T13:02:55.778643555Z" level=info msg="CreateContainer within sandbox \"8392468656b32d926bc08bb347d0faaf0648b19b0f982248d1923b950701021d\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jan 30 13:02:55.828119 containerd[1447]: time="2025-01-30T13:02:55.828060355Z" level=info msg="CreateContainer within sandbox \"8392468656b32d926bc08bb347d0faaf0648b19b0f982248d1923b950701021d\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"8d27ade7cb2d4b95edbe384d9f87f25ea9e13d1183f24b5a2ace5389973e1922\"" Jan 30 13:02:55.829384 containerd[1447]: time="2025-01-30T13:02:55.829337994Z" 
level=info msg="StartContainer for \"8d27ade7cb2d4b95edbe384d9f87f25ea9e13d1183f24b5a2ace5389973e1922\"" Jan 30 13:02:55.884017 systemd[1]: Started cri-containerd-8d27ade7cb2d4b95edbe384d9f87f25ea9e13d1183f24b5a2ace5389973e1922.scope - libcontainer container 8d27ade7cb2d4b95edbe384d9f87f25ea9e13d1183f24b5a2ace5389973e1922. Jan 30 13:02:55.935684 containerd[1447]: time="2025-01-30T13:02:55.932202751Z" level=info msg="StartContainer for \"8d27ade7cb2d4b95edbe384d9f87f25ea9e13d1183f24b5a2ace5389973e1922\" returns successfully" Jan 30 13:02:56.132471 kubelet[2447]: I0130 13:02:56.132318 2447 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jan 30 13:02:56.133862 kubelet[2447]: I0130 13:02:56.133748 2447 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jan 30 13:02:56.259051 kubelet[2447]: I0130 13:02:56.259016 2447 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 13:03:00.019380 systemd[1]: Started sshd@9-10.0.0.89:22-10.0.0.1:53038.service - OpenSSH per-connection server daemon (10.0.0.1:53038). Jan 30 13:03:00.088852 sshd[4989]: Accepted publickey for core from 10.0.0.1 port 53038 ssh2: RSA SHA256:L/o4MUo/PVZc4DGxUVgHzL5cb5Nt9LAG1MaqWWrUsp0 Jan 30 13:03:00.091950 sshd[4989]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:03:00.096940 systemd-logind[1423]: New session 10 of user core. Jan 30 13:03:00.108866 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 30 13:03:00.391137 sshd[4989]: pam_unix(sshd:session): session closed for user core Jan 30 13:03:00.407366 systemd[1]: sshd@9-10.0.0.89:22-10.0.0.1:53038.service: Deactivated successfully. Jan 30 13:03:00.409841 systemd[1]: session-10.scope: Deactivated successfully. Jan 30 13:03:00.411279 systemd-logind[1423]: Session 10 logged out. Waiting for processes to exit. Jan 30 13:03:00.417534 systemd[1]: Started sshd@10-10.0.0.89:22-10.0.0.1:53042.service - OpenSSH per-connection server daemon (10.0.0.1:53042). Jan 30 13:03:00.420607 systemd-logind[1423]: Removed session 10. Jan 30 13:03:00.464126 sshd[5004]: Accepted publickey for core from 10.0.0.1 port 53042 ssh2: RSA SHA256:L/o4MUo/PVZc4DGxUVgHzL5cb5Nt9LAG1MaqWWrUsp0 Jan 30 13:03:00.465928 sshd[5004]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:03:00.473020 systemd-logind[1423]: New session 11 of user core. Jan 30 13:03:00.477877 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 30 13:03:00.675706 sshd[5004]: pam_unix(sshd:session): session closed for user core Jan 30 13:03:00.687224 systemd[1]: sshd@10-10.0.0.89:22-10.0.0.1:53042.service: Deactivated successfully. Jan 30 13:03:00.691413 systemd[1]: session-11.scope: Deactivated successfully. Jan 30 13:03:00.693525 systemd-logind[1423]: Session 11 logged out. Waiting for processes to exit. Jan 30 13:03:00.709359 systemd[1]: Started sshd@11-10.0.0.89:22-10.0.0.1:53044.service - OpenSSH per-connection server daemon (10.0.0.1:53044). Jan 30 13:03:00.715011 systemd-logind[1423]: Removed session 11. 
Jan 30 13:03:00.752198 sshd[5016]: Accepted publickey for core from 10.0.0.1 port 53044 ssh2: RSA SHA256:L/o4MUo/PVZc4DGxUVgHzL5cb5Nt9LAG1MaqWWrUsp0 Jan 30 13:03:00.754309 sshd[5016]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:03:00.759899 systemd-logind[1423]: New session 12 of user core. Jan 30 13:03:00.766860 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 30 13:03:00.943682 sshd[5016]: pam_unix(sshd:session): session closed for user core Jan 30 13:03:00.948857 systemd[1]: sshd@11-10.0.0.89:22-10.0.0.1:53044.service: Deactivated successfully. Jan 30 13:03:00.950689 systemd[1]: session-12.scope: Deactivated successfully. Jan 30 13:03:00.952720 systemd-logind[1423]: Session 12 logged out. Waiting for processes to exit. Jan 30 13:03:00.955687 systemd-logind[1423]: Removed session 12. Jan 30 13:03:02.604737 kubelet[2447]: I0130 13:03:02.604567 2447 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 13:03:02.634334 kubelet[2447]: I0130 13:03:02.634263 2447 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-slr48" podStartSLOduration=30.382450928 podStartE2EDuration="34.634246025s" podCreationTimestamp="2025-01-30 13:02:28 +0000 UTC" firstStartedPulling="2025-01-30 13:02:51.52386162 +0000 UTC m=+36.598425632" lastFinishedPulling="2025-01-30 13:02:55.775656717 +0000 UTC m=+40.850220729" observedRunningTime="2025-01-30 13:02:56.278169407 +0000 UTC m=+41.352733419" watchObservedRunningTime="2025-01-30 13:03:02.634246025 +0000 UTC m=+47.708810037" Jan 30 13:03:05.962810 systemd[1]: Started sshd@12-10.0.0.89:22-10.0.0.1:59702.service - OpenSSH per-connection server daemon (10.0.0.1:59702). Jan 30 13:03:06.000471 sshd[5067]: Accepted publickey for core from 10.0.0.1 port 59702 ssh2: RSA SHA256:L/o4MUo/PVZc4DGxUVgHzL5cb5Nt9LAG1MaqWWrUsp0 Jan 30 13:03:06.002273 sshd[5067]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:03:06.006118 systemd-logind[1423]: New session 13 of user core. Jan 30 13:03:06.011805 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 30 13:03:06.174846 sshd[5067]: pam_unix(sshd:session): session closed for user core Jan 30 13:03:06.187014 systemd[1]: sshd@12-10.0.0.89:22-10.0.0.1:59702.service: Deactivated successfully. Jan 30 13:03:06.188757 systemd[1]: session-13.scope: Deactivated successfully. Jan 30 13:03:06.194173 systemd-logind[1423]: Session 13 logged out. Waiting for processes to exit. Jan 30 13:03:06.203005 systemd[1]: Started sshd@13-10.0.0.89:22-10.0.0.1:59718.service - OpenSSH per-connection server daemon (10.0.0.1:59718). Jan 30 13:03:06.205121 systemd-logind[1423]: Removed session 13. Jan 30 13:03:06.238190 sshd[5082]: Accepted publickey for core from 10.0.0.1 port 59718 ssh2: RSA SHA256:L/o4MUo/PVZc4DGxUVgHzL5cb5Nt9LAG1MaqWWrUsp0 Jan 30 13:03:06.239799 sshd[5082]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:03:06.247640 systemd-logind[1423]: New session 14 of user core. Jan 30 13:03:06.252486 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 30 13:03:06.494769 sshd[5082]: pam_unix(sshd:session): session closed for user core Jan 30 13:03:06.506560 systemd[1]: sshd@13-10.0.0.89:22-10.0.0.1:59718.service: Deactivated successfully. Jan 30 13:03:06.508396 systemd[1]: session-14.scope: Deactivated successfully. Jan 30 13:03:06.510453 systemd-logind[1423]: Session 14 logged out. Waiting for processes to exit. 
Jan 30 13:03:06.512140 systemd[1]: Started sshd@14-10.0.0.89:22-10.0.0.1:59720.service - OpenSSH per-connection server daemon (10.0.0.1:59720). Jan 30 13:03:06.513605 systemd-logind[1423]: Removed session 14. Jan 30 13:03:06.555029 sshd[5095]: Accepted publickey for core from 10.0.0.1 port 59720 ssh2: RSA SHA256:L/o4MUo/PVZc4DGxUVgHzL5cb5Nt9LAG1MaqWWrUsp0 Jan 30 13:03:06.556665 sshd[5095]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:03:06.561109 systemd-logind[1423]: New session 15 of user core. Jan 30 13:03:06.573875 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 30 13:03:08.101987 sshd[5095]: pam_unix(sshd:session): session closed for user core Jan 30 13:03:08.114900 systemd[1]: Started sshd@15-10.0.0.89:22-10.0.0.1:59736.service - OpenSSH per-connection server daemon (10.0.0.1:59736). Jan 30 13:03:08.116392 systemd[1]: sshd@14-10.0.0.89:22-10.0.0.1:59720.service: Deactivated successfully. Jan 30 13:03:08.119596 systemd[1]: session-15.scope: Deactivated successfully. Jan 30 13:03:08.124392 systemd-logind[1423]: Session 15 logged out. Waiting for processes to exit. Jan 30 13:03:08.132119 systemd-logind[1423]: Removed session 15. Jan 30 13:03:08.177258 sshd[5116]: Accepted publickey for core from 10.0.0.1 port 59736 ssh2: RSA SHA256:L/o4MUo/PVZc4DGxUVgHzL5cb5Nt9LAG1MaqWWrUsp0 Jan 30 13:03:08.178732 sshd[5116]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:03:08.184576 systemd-logind[1423]: New session 16 of user core. Jan 30 13:03:08.191970 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 30 13:03:08.557637 sshd[5116]: pam_unix(sshd:session): session closed for user core Jan 30 13:03:08.574780 systemd[1]: sshd@15-10.0.0.89:22-10.0.0.1:59736.service: Deactivated successfully. Jan 30 13:03:08.578236 systemd[1]: session-16.scope: Deactivated successfully. Jan 30 13:03:08.580752 systemd-logind[1423]: Session 16 logged out. Waiting for processes to exit. Jan 30 13:03:08.582096 systemd-logind[1423]: Removed session 16. Jan 30 13:03:08.592132 systemd[1]: Started sshd@16-10.0.0.89:22-10.0.0.1:59750.service - OpenSSH per-connection server daemon (10.0.0.1:59750). Jan 30 13:03:08.626143 sshd[5130]: Accepted publickey for core from 10.0.0.1 port 59750 ssh2: RSA SHA256:L/o4MUo/PVZc4DGxUVgHzL5cb5Nt9LAG1MaqWWrUsp0 Jan 30 13:03:08.628531 sshd[5130]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:03:08.633878 systemd-logind[1423]: New session 17 of user core. Jan 30 13:03:08.643840 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 30 13:03:08.802691 sshd[5130]: pam_unix(sshd:session): session closed for user core Jan 30 13:03:08.809011 systemd[1]: sshd@16-10.0.0.89:22-10.0.0.1:59750.service: Deactivated successfully. Jan 30 13:03:08.811490 systemd[1]: session-17.scope: Deactivated successfully. Jan 30 13:03:08.813759 systemd-logind[1423]: Session 17 logged out. Waiting for processes to exit. Jan 30 13:03:08.814819 systemd-logind[1423]: Removed session 17. Jan 30 13:03:11.218105 kubelet[2447]: I0130 13:03:11.217980 2447 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 13:03:13.814644 systemd[1]: Started sshd@17-10.0.0.89:22-10.0.0.1:36456.service - OpenSSH per-connection server daemon (10.0.0.1:36456). 
Jan 30 13:03:13.858363 sshd[5171]: Accepted publickey for core from 10.0.0.1 port 36456 ssh2: RSA SHA256:L/o4MUo/PVZc4DGxUVgHzL5cb5Nt9LAG1MaqWWrUsp0 Jan 30 13:03:13.860174 sshd[5171]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:03:13.864478 systemd-logind[1423]: New session 18 of user core. Jan 30 13:03:13.874789 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 30 13:03:14.057284 sshd[5171]: pam_unix(sshd:session): session closed for user core Jan 30 13:03:14.067960 systemd[1]: sshd@17-10.0.0.89:22-10.0.0.1:36456.service: Deactivated successfully. Jan 30 13:03:14.073573 systemd[1]: session-18.scope: Deactivated successfully. Jan 30 13:03:14.075569 systemd-logind[1423]: Session 18 logged out. Waiting for processes to exit. Jan 30 13:03:14.077270 systemd-logind[1423]: Removed session 18. Jan 30 13:03:15.028107 containerd[1447]: time="2025-01-30T13:03:15.027790299Z" level=info msg="StopPodSandbox for \"4231d25d555a20808ef4a5657d6c8bf54a6941380b8035735f01656c2617a17e\"" Jan 30 13:03:15.134021 containerd[1447]: 2025-01-30 13:03:15.092 [WARNING][5202] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="4231d25d555a20808ef4a5657d6c8bf54a6941380b8035735f01656c2617a17e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--drzkj-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"70d42510-d7d7-428a-b1a1-8b675ee51848", ResourceVersion:"878", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 2, 21, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"253750dc5646a8da46e38cad546bbd18750fff95a32fc27211587aab7df8124a", Pod:"coredns-6f6b679f8f-drzkj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali212b073d8c1", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:03:15.134021 containerd[1447]: 2025-01-30 13:03:15.092 [INFO][5202] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="4231d25d555a20808ef4a5657d6c8bf54a6941380b8035735f01656c2617a17e" Jan 30 13:03:15.134021 containerd[1447]: 2025-01-30 13:03:15.092 [INFO][5202] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="4231d25d555a20808ef4a5657d6c8bf54a6941380b8035735f01656c2617a17e" iface="eth0" netns="" Jan 30 13:03:15.134021 containerd[1447]: 2025-01-30 13:03:15.092 [INFO][5202] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="4231d25d555a20808ef4a5657d6c8bf54a6941380b8035735f01656c2617a17e" Jan 30 13:03:15.134021 containerd[1447]: 2025-01-30 13:03:15.092 [INFO][5202] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4231d25d555a20808ef4a5657d6c8bf54a6941380b8035735f01656c2617a17e" Jan 30 13:03:15.134021 containerd[1447]: 2025-01-30 13:03:15.116 [INFO][5210] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4231d25d555a20808ef4a5657d6c8bf54a6941380b8035735f01656c2617a17e" HandleID="k8s-pod-network.4231d25d555a20808ef4a5657d6c8bf54a6941380b8035735f01656c2617a17e" Workload="localhost-k8s-coredns--6f6b679f8f--drzkj-eth0" Jan 30 13:03:15.134021 containerd[1447]: 2025-01-30 13:03:15.116 [INFO][5210] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:03:15.134021 containerd[1447]: 2025-01-30 13:03:15.116 [INFO][5210] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:03:15.134021 containerd[1447]: 2025-01-30 13:03:15.126 [WARNING][5210] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="4231d25d555a20808ef4a5657d6c8bf54a6941380b8035735f01656c2617a17e" HandleID="k8s-pod-network.4231d25d555a20808ef4a5657d6c8bf54a6941380b8035735f01656c2617a17e" Workload="localhost-k8s-coredns--6f6b679f8f--drzkj-eth0" Jan 30 13:03:15.134021 containerd[1447]: 2025-01-30 13:03:15.126 [INFO][5210] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4231d25d555a20808ef4a5657d6c8bf54a6941380b8035735f01656c2617a17e" HandleID="k8s-pod-network.4231d25d555a20808ef4a5657d6c8bf54a6941380b8035735f01656c2617a17e" Workload="localhost-k8s-coredns--6f6b679f8f--drzkj-eth0" Jan 30 13:03:15.134021 containerd[1447]: 2025-01-30 13:03:15.128 [INFO][5210] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:03:15.134021 containerd[1447]: 2025-01-30 13:03:15.131 [INFO][5202] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="4231d25d555a20808ef4a5657d6c8bf54a6941380b8035735f01656c2617a17e" Jan 30 13:03:15.134632 containerd[1447]: time="2025-01-30T13:03:15.134085075Z" level=info msg="TearDown network for sandbox \"4231d25d555a20808ef4a5657d6c8bf54a6941380b8035735f01656c2617a17e\" successfully" Jan 30 13:03:15.134632 containerd[1447]: time="2025-01-30T13:03:15.134112595Z" level=info msg="StopPodSandbox for \"4231d25d555a20808ef4a5657d6c8bf54a6941380b8035735f01656c2617a17e\" returns successfully" Jan 30 13:03:15.135243 containerd[1447]: time="2025-01-30T13:03:15.134924675Z" level=info msg="RemovePodSandbox for \"4231d25d555a20808ef4a5657d6c8bf54a6941380b8035735f01656c2617a17e\"" Jan 30 13:03:15.145948 containerd[1447]: time="2025-01-30T13:03:15.145693153Z" level=info msg="Forcibly stopping sandbox \"4231d25d555a20808ef4a5657d6c8bf54a6941380b8035735f01656c2617a17e\"" Jan 30 13:03:15.223467 containerd[1447]: 2025-01-30 13:03:15.186 [WARNING][5232] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4231d25d555a20808ef4a5657d6c8bf54a6941380b8035735f01656c2617a17e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--drzkj-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"70d42510-d7d7-428a-b1a1-8b675ee51848", ResourceVersion:"878", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 2, 21, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"253750dc5646a8da46e38cad546bbd18750fff95a32fc27211587aab7df8124a", Pod:"coredns-6f6b679f8f-drzkj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali212b073d8c1", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:03:15.223467 containerd[1447]: 2025-01-30 13:03:15.186 [INFO][5232] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="4231d25d555a20808ef4a5657d6c8bf54a6941380b8035735f01656c2617a17e" Jan 30 13:03:15.223467 containerd[1447]: 2025-01-30 13:03:15.186 [INFO][5232] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4231d25d555a20808ef4a5657d6c8bf54a6941380b8035735f01656c2617a17e" iface="eth0" netns="" Jan 30 13:03:15.223467 containerd[1447]: 2025-01-30 13:03:15.186 [INFO][5232] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="4231d25d555a20808ef4a5657d6c8bf54a6941380b8035735f01656c2617a17e" Jan 30 13:03:15.223467 containerd[1447]: 2025-01-30 13:03:15.186 [INFO][5232] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4231d25d555a20808ef4a5657d6c8bf54a6941380b8035735f01656c2617a17e" Jan 30 13:03:15.223467 containerd[1447]: 2025-01-30 13:03:15.209 [INFO][5239] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4231d25d555a20808ef4a5657d6c8bf54a6941380b8035735f01656c2617a17e" HandleID="k8s-pod-network.4231d25d555a20808ef4a5657d6c8bf54a6941380b8035735f01656c2617a17e" Workload="localhost-k8s-coredns--6f6b679f8f--drzkj-eth0" Jan 30 13:03:15.223467 containerd[1447]: 2025-01-30 13:03:15.209 [INFO][5239] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:03:15.223467 containerd[1447]: 2025-01-30 13:03:15.209 [INFO][5239] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 13:03:15.223467 containerd[1447]: 2025-01-30 13:03:15.218 [WARNING][5239] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="4231d25d555a20808ef4a5657d6c8bf54a6941380b8035735f01656c2617a17e" HandleID="k8s-pod-network.4231d25d555a20808ef4a5657d6c8bf54a6941380b8035735f01656c2617a17e" Workload="localhost-k8s-coredns--6f6b679f8f--drzkj-eth0" Jan 30 13:03:15.223467 containerd[1447]: 2025-01-30 13:03:15.219 [INFO][5239] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4231d25d555a20808ef4a5657d6c8bf54a6941380b8035735f01656c2617a17e" HandleID="k8s-pod-network.4231d25d555a20808ef4a5657d6c8bf54a6941380b8035735f01656c2617a17e" Workload="localhost-k8s-coredns--6f6b679f8f--drzkj-eth0" Jan 30 13:03:15.223467 containerd[1447]: 2025-01-30 13:03:15.220 [INFO][5239] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:03:15.223467 containerd[1447]: 2025-01-30 13:03:15.222 [INFO][5232] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="4231d25d555a20808ef4a5657d6c8bf54a6941380b8035735f01656c2617a17e" Jan 30 13:03:15.225179 containerd[1447]: time="2025-01-30T13:03:15.223983136Z" level=info msg="TearDown network for sandbox \"4231d25d555a20808ef4a5657d6c8bf54a6941380b8035735f01656c2617a17e\" successfully" Jan 30 13:03:15.267308 containerd[1447]: time="2025-01-30T13:03:15.267247846Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4231d25d555a20808ef4a5657d6c8bf54a6941380b8035735f01656c2617a17e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 13:03:15.267420 containerd[1447]: time="2025-01-30T13:03:15.267367926Z" level=info msg="RemovePodSandbox \"4231d25d555a20808ef4a5657d6c8bf54a6941380b8035735f01656c2617a17e\" returns successfully" Jan 30 13:03:15.267960 containerd[1447]: time="2025-01-30T13:03:15.267932766Z" level=info msg="StopPodSandbox for \"a2efa84a217152daf89eb875770aef6455c7dc7562ea3d8bd6b29e561d652a82\"" Jan 30 13:03:15.352383 containerd[1447]: 2025-01-30 13:03:15.309 [WARNING][5261] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a2efa84a217152daf89eb875770aef6455c7dc7562ea3d8bd6b29e561d652a82" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--849d69c5fc--hlqw7-eth0", GenerateName:"calico-kube-controllers-849d69c5fc-", Namespace:"calico-system", SelfLink:"", UID:"6f5a0fbe-ae0d-4b06-9fb9-d4626175bd41", ResourceVersion:"847", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 2, 28, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"849d69c5fc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a6ea405ad39a9570f2e560727c79c598b9f07fdb3b73419a20036dbf629f18f7", Pod:"calico-kube-controllers-849d69c5fc-hlqw7", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia5b922e1ec4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:03:15.352383 containerd[1447]: 2025-01-30 13:03:15.309 [INFO][5261] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a2efa84a217152daf89eb875770aef6455c7dc7562ea3d8bd6b29e561d652a82" Jan 30 13:03:15.352383 containerd[1447]: 2025-01-30 13:03:15.309 [INFO][5261] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a2efa84a217152daf89eb875770aef6455c7dc7562ea3d8bd6b29e561d652a82" iface="eth0" netns="" Jan 30 13:03:15.352383 containerd[1447]: 2025-01-30 13:03:15.309 [INFO][5261] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a2efa84a217152daf89eb875770aef6455c7dc7562ea3d8bd6b29e561d652a82" Jan 30 13:03:15.352383 containerd[1447]: 2025-01-30 13:03:15.309 [INFO][5261] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a2efa84a217152daf89eb875770aef6455c7dc7562ea3d8bd6b29e561d652a82" Jan 30 13:03:15.352383 containerd[1447]: 2025-01-30 13:03:15.334 [INFO][5268] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a2efa84a217152daf89eb875770aef6455c7dc7562ea3d8bd6b29e561d652a82" HandleID="k8s-pod-network.a2efa84a217152daf89eb875770aef6455c7dc7562ea3d8bd6b29e561d652a82" Workload="localhost-k8s-calico--kube--controllers--849d69c5fc--hlqw7-eth0" Jan 30 13:03:15.352383 containerd[1447]: 2025-01-30 13:03:15.334 [INFO][5268] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:03:15.352383 containerd[1447]: 2025-01-30 13:03:15.334 [INFO][5268] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:03:15.352383 containerd[1447]: 2025-01-30 13:03:15.344 [WARNING][5268] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a2efa84a217152daf89eb875770aef6455c7dc7562ea3d8bd6b29e561d652a82" HandleID="k8s-pod-network.a2efa84a217152daf89eb875770aef6455c7dc7562ea3d8bd6b29e561d652a82" Workload="localhost-k8s-calico--kube--controllers--849d69c5fc--hlqw7-eth0" Jan 30 13:03:15.352383 containerd[1447]: 2025-01-30 13:03:15.344 [INFO][5268] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a2efa84a217152daf89eb875770aef6455c7dc7562ea3d8bd6b29e561d652a82" HandleID="k8s-pod-network.a2efa84a217152daf89eb875770aef6455c7dc7562ea3d8bd6b29e561d652a82" Workload="localhost-k8s-calico--kube--controllers--849d69c5fc--hlqw7-eth0" Jan 30 13:03:15.352383 containerd[1447]: 2025-01-30 13:03:15.348 [INFO][5268] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:03:15.352383 containerd[1447]: 2025-01-30 13:03:15.350 [INFO][5261] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="a2efa84a217152daf89eb875770aef6455c7dc7562ea3d8bd6b29e561d652a82" Jan 30 13:03:15.352383 containerd[1447]: time="2025-01-30T13:03:15.352323747Z" level=info msg="TearDown network for sandbox \"a2efa84a217152daf89eb875770aef6455c7dc7562ea3d8bd6b29e561d652a82\" successfully" Jan 30 13:03:15.352383 containerd[1447]: time="2025-01-30T13:03:15.352349667Z" level=info msg="StopPodSandbox for \"a2efa84a217152daf89eb875770aef6455c7dc7562ea3d8bd6b29e561d652a82\" returns successfully" Jan 30 13:03:15.354525 containerd[1447]: time="2025-01-30T13:03:15.354468947Z" level=info msg="RemovePodSandbox for \"a2efa84a217152daf89eb875770aef6455c7dc7562ea3d8bd6b29e561d652a82\"" Jan 30 13:03:15.354525 containerd[1447]: time="2025-01-30T13:03:15.354520107Z" level=info msg="Forcibly stopping sandbox \"a2efa84a217152daf89eb875770aef6455c7dc7562ea3d8bd6b29e561d652a82\"" Jan 30 13:03:15.438491 containerd[1447]: 2025-01-30 13:03:15.403 [WARNING][5290] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a2efa84a217152daf89eb875770aef6455c7dc7562ea3d8bd6b29e561d652a82" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--849d69c5fc--hlqw7-eth0", GenerateName:"calico-kube-controllers-849d69c5fc-", Namespace:"calico-system", SelfLink:"", UID:"6f5a0fbe-ae0d-4b06-9fb9-d4626175bd41", ResourceVersion:"847", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 2, 28, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"849d69c5fc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a6ea405ad39a9570f2e560727c79c598b9f07fdb3b73419a20036dbf629f18f7", Pod:"calico-kube-controllers-849d69c5fc-hlqw7", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia5b922e1ec4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:03:15.438491 containerd[1447]: 2025-01-30 13:03:15.403 [INFO][5290] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a2efa84a217152daf89eb875770aef6455c7dc7562ea3d8bd6b29e561d652a82" Jan 30 13:03:15.438491 containerd[1447]: 2025-01-30 13:03:15.403 [INFO][5290] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a2efa84a217152daf89eb875770aef6455c7dc7562ea3d8bd6b29e561d652a82" iface="eth0" netns="" Jan 30 13:03:15.438491 containerd[1447]: 2025-01-30 13:03:15.403 [INFO][5290] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a2efa84a217152daf89eb875770aef6455c7dc7562ea3d8bd6b29e561d652a82" Jan 30 13:03:15.438491 containerd[1447]: 2025-01-30 13:03:15.403 [INFO][5290] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a2efa84a217152daf89eb875770aef6455c7dc7562ea3d8bd6b29e561d652a82" Jan 30 13:03:15.438491 containerd[1447]: 2025-01-30 13:03:15.423 [INFO][5298] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a2efa84a217152daf89eb875770aef6455c7dc7562ea3d8bd6b29e561d652a82" HandleID="k8s-pod-network.a2efa84a217152daf89eb875770aef6455c7dc7562ea3d8bd6b29e561d652a82" Workload="localhost-k8s-calico--kube--controllers--849d69c5fc--hlqw7-eth0" Jan 30 13:03:15.438491 containerd[1447]: 2025-01-30 13:03:15.423 [INFO][5298] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:03:15.438491 containerd[1447]: 2025-01-30 13:03:15.423 [INFO][5298] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:03:15.438491 containerd[1447]: 2025-01-30 13:03:15.432 [WARNING][5298] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a2efa84a217152daf89eb875770aef6455c7dc7562ea3d8bd6b29e561d652a82" HandleID="k8s-pod-network.a2efa84a217152daf89eb875770aef6455c7dc7562ea3d8bd6b29e561d652a82" Workload="localhost-k8s-calico--kube--controllers--849d69c5fc--hlqw7-eth0" Jan 30 13:03:15.438491 containerd[1447]: 2025-01-30 13:03:15.432 [INFO][5298] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a2efa84a217152daf89eb875770aef6455c7dc7562ea3d8bd6b29e561d652a82" HandleID="k8s-pod-network.a2efa84a217152daf89eb875770aef6455c7dc7562ea3d8bd6b29e561d652a82" Workload="localhost-k8s-calico--kube--controllers--849d69c5fc--hlqw7-eth0" Jan 30 13:03:15.438491 containerd[1447]: 2025-01-30 13:03:15.434 [INFO][5298] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:03:15.438491 containerd[1447]: 2025-01-30 13:03:15.435 [INFO][5290] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="a2efa84a217152daf89eb875770aef6455c7dc7562ea3d8bd6b29e561d652a82" Jan 30 13:03:15.438491 containerd[1447]: time="2025-01-30T13:03:15.437272168Z" level=info msg="TearDown network for sandbox \"a2efa84a217152daf89eb875770aef6455c7dc7562ea3d8bd6b29e561d652a82\" successfully" Jan 30 13:03:15.440769 containerd[1447]: time="2025-01-30T13:03:15.440706888Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a2efa84a217152daf89eb875770aef6455c7dc7562ea3d8bd6b29e561d652a82\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 13:03:15.440927 containerd[1447]: time="2025-01-30T13:03:15.440908168Z" level=info msg="RemovePodSandbox \"a2efa84a217152daf89eb875770aef6455c7dc7562ea3d8bd6b29e561d652a82\" returns successfully" Jan 30 13:03:15.441575 containerd[1447]: time="2025-01-30T13:03:15.441552368Z" level=info msg="StopPodSandbox for \"73e6685a3f4e555815fd515090951c9e9eec92296cc236a94a5f1c9fdca86dcc\"" Jan 30 13:03:15.510606 containerd[1447]: 2025-01-30 13:03:15.477 [WARNING][5321] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="73e6685a3f4e555815fd515090951c9e9eec92296cc236a94a5f1c9fdca86dcc" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6cb779964f--8zwbp-eth0", GenerateName:"calico-apiserver-6cb779964f-", Namespace:"calico-apiserver", SelfLink:"", UID:"3cc3298b-ef9c-4aeb-903e-4a8f5c9daf9d", ResourceVersion:"1054", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 2, 27, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6cb779964f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c9f170d5110f0f9df66091317d68e259e896724fc591b94eac2b76aa286b8ec1", Pod:"calico-apiserver-6cb779964f-8zwbp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1ad9239eb17", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:03:15.510606 containerd[1447]: 2025-01-30 13:03:15.477 [INFO][5321] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="73e6685a3f4e555815fd515090951c9e9eec92296cc236a94a5f1c9fdca86dcc" Jan 30 13:03:15.510606 containerd[1447]: 2025-01-30 13:03:15.477 [INFO][5321] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="73e6685a3f4e555815fd515090951c9e9eec92296cc236a94a5f1c9fdca86dcc" iface="eth0" netns="" Jan 30 13:03:15.510606 containerd[1447]: 2025-01-30 13:03:15.477 [INFO][5321] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="73e6685a3f4e555815fd515090951c9e9eec92296cc236a94a5f1c9fdca86dcc" Jan 30 13:03:15.510606 containerd[1447]: 2025-01-30 13:03:15.477 [INFO][5321] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="73e6685a3f4e555815fd515090951c9e9eec92296cc236a94a5f1c9fdca86dcc" Jan 30 13:03:15.510606 containerd[1447]: 2025-01-30 13:03:15.497 [INFO][5328] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="73e6685a3f4e555815fd515090951c9e9eec92296cc236a94a5f1c9fdca86dcc" HandleID="k8s-pod-network.73e6685a3f4e555815fd515090951c9e9eec92296cc236a94a5f1c9fdca86dcc" Workload="localhost-k8s-calico--apiserver--6cb779964f--8zwbp-eth0" Jan 30 13:03:15.510606 containerd[1447]: 2025-01-30 13:03:15.497 [INFO][5328] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:03:15.510606 containerd[1447]: 2025-01-30 13:03:15.497 [INFO][5328] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:03:15.510606 containerd[1447]: 2025-01-30 13:03:15.505 [WARNING][5328] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="73e6685a3f4e555815fd515090951c9e9eec92296cc236a94a5f1c9fdca86dcc" HandleID="k8s-pod-network.73e6685a3f4e555815fd515090951c9e9eec92296cc236a94a5f1c9fdca86dcc" Workload="localhost-k8s-calico--apiserver--6cb779964f--8zwbp-eth0" Jan 30 13:03:15.510606 containerd[1447]: 2025-01-30 13:03:15.505 [INFO][5328] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="73e6685a3f4e555815fd515090951c9e9eec92296cc236a94a5f1c9fdca86dcc" HandleID="k8s-pod-network.73e6685a3f4e555815fd515090951c9e9eec92296cc236a94a5f1c9fdca86dcc" Workload="localhost-k8s-calico--apiserver--6cb779964f--8zwbp-eth0" Jan 30 13:03:15.510606 containerd[1447]: 2025-01-30 13:03:15.507 [INFO][5328] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:03:15.510606 containerd[1447]: 2025-01-30 13:03:15.509 [INFO][5321] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="73e6685a3f4e555815fd515090951c9e9eec92296cc236a94a5f1c9fdca86dcc" Jan 30 13:03:15.511161 containerd[1447]: time="2025-01-30T13:03:15.510657952Z" level=info msg="TearDown network for sandbox \"73e6685a3f4e555815fd515090951c9e9eec92296cc236a94a5f1c9fdca86dcc\" successfully" Jan 30 13:03:15.511161 containerd[1447]: time="2025-01-30T13:03:15.510685872Z" level=info msg="StopPodSandbox for \"73e6685a3f4e555815fd515090951c9e9eec92296cc236a94a5f1c9fdca86dcc\" returns successfully" Jan 30 13:03:15.511713 containerd[1447]: time="2025-01-30T13:03:15.511376712Z" level=info msg="RemovePodSandbox for \"73e6685a3f4e555815fd515090951c9e9eec92296cc236a94a5f1c9fdca86dcc\"" Jan 30 13:03:15.511713 containerd[1447]: time="2025-01-30T13:03:15.511412952Z" level=info msg="Forcibly stopping sandbox \"73e6685a3f4e555815fd515090951c9e9eec92296cc236a94a5f1c9fdca86dcc\"" Jan 30 13:03:15.587403 containerd[1447]: 2025-01-30 13:03:15.547 [WARNING][5350] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="73e6685a3f4e555815fd515090951c9e9eec92296cc236a94a5f1c9fdca86dcc" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6cb779964f--8zwbp-eth0", GenerateName:"calico-apiserver-6cb779964f-", Namespace:"calico-apiserver", SelfLink:"", UID:"3cc3298b-ef9c-4aeb-903e-4a8f5c9daf9d", ResourceVersion:"1054", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 2, 27, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6cb779964f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c9f170d5110f0f9df66091317d68e259e896724fc591b94eac2b76aa286b8ec1", Pod:"calico-apiserver-6cb779964f-8zwbp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1ad9239eb17", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:03:15.587403 containerd[1447]: 2025-01-30 13:03:15.548 [INFO][5350] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="73e6685a3f4e555815fd515090951c9e9eec92296cc236a94a5f1c9fdca86dcc" Jan 30 13:03:15.587403 containerd[1447]: 2025-01-30 13:03:15.548 [INFO][5350] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="73e6685a3f4e555815fd515090951c9e9eec92296cc236a94a5f1c9fdca86dcc" iface="eth0" netns="" Jan 30 13:03:15.587403 containerd[1447]: 2025-01-30 13:03:15.548 [INFO][5350] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="73e6685a3f4e555815fd515090951c9e9eec92296cc236a94a5f1c9fdca86dcc" Jan 30 13:03:15.587403 containerd[1447]: 2025-01-30 13:03:15.548 [INFO][5350] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="73e6685a3f4e555815fd515090951c9e9eec92296cc236a94a5f1c9fdca86dcc" Jan 30 13:03:15.587403 containerd[1447]: 2025-01-30 13:03:15.570 [INFO][5358] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="73e6685a3f4e555815fd515090951c9e9eec92296cc236a94a5f1c9fdca86dcc" HandleID="k8s-pod-network.73e6685a3f4e555815fd515090951c9e9eec92296cc236a94a5f1c9fdca86dcc" Workload="localhost-k8s-calico--apiserver--6cb779964f--8zwbp-eth0" Jan 30 13:03:15.587403 containerd[1447]: 2025-01-30 13:03:15.570 [INFO][5358] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:03:15.587403 containerd[1447]: 2025-01-30 13:03:15.570 [INFO][5358] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:03:15.587403 containerd[1447]: 2025-01-30 13:03:15.579 [WARNING][5358] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="73e6685a3f4e555815fd515090951c9e9eec92296cc236a94a5f1c9fdca86dcc" HandleID="k8s-pod-network.73e6685a3f4e555815fd515090951c9e9eec92296cc236a94a5f1c9fdca86dcc" Workload="localhost-k8s-calico--apiserver--6cb779964f--8zwbp-eth0" Jan 30 13:03:15.587403 containerd[1447]: 2025-01-30 13:03:15.579 [INFO][5358] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="73e6685a3f4e555815fd515090951c9e9eec92296cc236a94a5f1c9fdca86dcc" HandleID="k8s-pod-network.73e6685a3f4e555815fd515090951c9e9eec92296cc236a94a5f1c9fdca86dcc" Workload="localhost-k8s-calico--apiserver--6cb779964f--8zwbp-eth0" Jan 30 13:03:15.587403 containerd[1447]: 2025-01-30 13:03:15.583 [INFO][5358] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:03:15.587403 containerd[1447]: 2025-01-30 13:03:15.585 [INFO][5350] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="73e6685a3f4e555815fd515090951c9e9eec92296cc236a94a5f1c9fdca86dcc" Jan 30 13:03:15.588473 containerd[1447]: time="2025-01-30T13:03:15.587985015Z" level=info msg="TearDown network for sandbox \"73e6685a3f4e555815fd515090951c9e9eec92296cc236a94a5f1c9fdca86dcc\" successfully" Jan 30 13:03:15.591924 containerd[1447]: time="2025-01-30T13:03:15.591878734Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"73e6685a3f4e555815fd515090951c9e9eec92296cc236a94a5f1c9fdca86dcc\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 13:03:15.592472 containerd[1447]: time="2025-01-30T13:03:15.592109494Z" level=info msg="RemovePodSandbox \"73e6685a3f4e555815fd515090951c9e9eec92296cc236a94a5f1c9fdca86dcc\" returns successfully" Jan 30 13:03:15.592821 containerd[1447]: time="2025-01-30T13:03:15.592796454Z" level=info msg="StopPodSandbox for \"46d93fa456940183af25a2d8b69c33bc9abfeba16d9106ec032afae4f6888f7b\"" Jan 30 13:03:15.678318 containerd[1447]: 2025-01-30 13:03:15.632 [WARNING][5380] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="46d93fa456940183af25a2d8b69c33bc9abfeba16d9106ec032afae4f6888f7b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--k4kch-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"c604cc46-99d4-4353-8500-4bb310160935", ResourceVersion:"886", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 2, 21, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d251903f1040fc7ad34f10b5b79cb8db9d777dfb15dfe894af79d2d69a4ac74e", Pod:"coredns-6f6b679f8f-k4kch", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib9e90fec21e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:03:15.678318 containerd[1447]: 2025-01-30 13:03:15.632 [INFO][5380] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="46d93fa456940183af25a2d8b69c33bc9abfeba16d9106ec032afae4f6888f7b" Jan 30 13:03:15.678318 containerd[1447]: 2025-01-30 13:03:15.632 [INFO][5380] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="46d93fa456940183af25a2d8b69c33bc9abfeba16d9106ec032afae4f6888f7b" iface="eth0" netns="" Jan 30 13:03:15.678318 containerd[1447]: 2025-01-30 13:03:15.632 [INFO][5380] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="46d93fa456940183af25a2d8b69c33bc9abfeba16d9106ec032afae4f6888f7b" Jan 30 13:03:15.678318 containerd[1447]: 2025-01-30 13:03:15.632 [INFO][5380] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="46d93fa456940183af25a2d8b69c33bc9abfeba16d9106ec032afae4f6888f7b" Jan 30 13:03:15.678318 containerd[1447]: 2025-01-30 13:03:15.658 [INFO][5387] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="46d93fa456940183af25a2d8b69c33bc9abfeba16d9106ec032afae4f6888f7b" HandleID="k8s-pod-network.46d93fa456940183af25a2d8b69c33bc9abfeba16d9106ec032afae4f6888f7b" Workload="localhost-k8s-coredns--6f6b679f8f--k4kch-eth0" Jan 30 13:03:15.678318 containerd[1447]: 2025-01-30 13:03:15.659 [INFO][5387] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:03:15.678318 containerd[1447]: 2025-01-30 13:03:15.659 [INFO][5387] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 13:03:15.678318 containerd[1447]: 2025-01-30 13:03:15.672 [WARNING][5387] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="46d93fa456940183af25a2d8b69c33bc9abfeba16d9106ec032afae4f6888f7b" HandleID="k8s-pod-network.46d93fa456940183af25a2d8b69c33bc9abfeba16d9106ec032afae4f6888f7b" Workload="localhost-k8s-coredns--6f6b679f8f--k4kch-eth0" Jan 30 13:03:15.678318 containerd[1447]: 2025-01-30 13:03:15.672 [INFO][5387] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="46d93fa456940183af25a2d8b69c33bc9abfeba16d9106ec032afae4f6888f7b" HandleID="k8s-pod-network.46d93fa456940183af25a2d8b69c33bc9abfeba16d9106ec032afae4f6888f7b" Workload="localhost-k8s-coredns--6f6b679f8f--k4kch-eth0" Jan 30 13:03:15.678318 containerd[1447]: 2025-01-30 13:03:15.674 [INFO][5387] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:03:15.678318 containerd[1447]: 2025-01-30 13:03:15.676 [INFO][5380] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="46d93fa456940183af25a2d8b69c33bc9abfeba16d9106ec032afae4f6888f7b" Jan 30 13:03:15.678318 containerd[1447]: time="2025-01-30T13:03:15.678209795Z" level=info msg="TearDown network for sandbox \"46d93fa456940183af25a2d8b69c33bc9abfeba16d9106ec032afae4f6888f7b\" successfully" Jan 30 13:03:15.678318 containerd[1447]: time="2025-01-30T13:03:15.678236115Z" level=info msg="StopPodSandbox for \"46d93fa456940183af25a2d8b69c33bc9abfeba16d9106ec032afae4f6888f7b\" returns successfully" Jan 30 13:03:15.679506 containerd[1447]: time="2025-01-30T13:03:15.679469875Z" level=info msg="RemovePodSandbox for \"46d93fa456940183af25a2d8b69c33bc9abfeba16d9106ec032afae4f6888f7b\"" Jan 30 13:03:15.679562 containerd[1447]: time="2025-01-30T13:03:15.679512835Z" level=info msg="Forcibly stopping sandbox \"46d93fa456940183af25a2d8b69c33bc9abfeba16d9106ec032afae4f6888f7b\"" Jan 30 13:03:15.754228 containerd[1447]: 2025-01-30 13:03:15.720 [WARNING][5409] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="46d93fa456940183af25a2d8b69c33bc9abfeba16d9106ec032afae4f6888f7b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--k4kch-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"c604cc46-99d4-4353-8500-4bb310160935", ResourceVersion:"886", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 2, 21, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d251903f1040fc7ad34f10b5b79cb8db9d777dfb15dfe894af79d2d69a4ac74e", Pod:"coredns-6f6b679f8f-k4kch", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib9e90fec21e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:03:15.754228 containerd[1447]: 2025-01-30 13:03:15.720 [INFO][5409] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="46d93fa456940183af25a2d8b69c33bc9abfeba16d9106ec032afae4f6888f7b" Jan 30 13:03:15.754228 containerd[1447]: 2025-01-30 13:03:15.720 [INFO][5409] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="46d93fa456940183af25a2d8b69c33bc9abfeba16d9106ec032afae4f6888f7b" iface="eth0" netns="" Jan 30 13:03:15.754228 containerd[1447]: 2025-01-30 13:03:15.720 [INFO][5409] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="46d93fa456940183af25a2d8b69c33bc9abfeba16d9106ec032afae4f6888f7b" Jan 30 13:03:15.754228 containerd[1447]: 2025-01-30 13:03:15.720 [INFO][5409] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="46d93fa456940183af25a2d8b69c33bc9abfeba16d9106ec032afae4f6888f7b" Jan 30 13:03:15.754228 containerd[1447]: 2025-01-30 13:03:15.739 [INFO][5417] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="46d93fa456940183af25a2d8b69c33bc9abfeba16d9106ec032afae4f6888f7b" HandleID="k8s-pod-network.46d93fa456940183af25a2d8b69c33bc9abfeba16d9106ec032afae4f6888f7b" Workload="localhost-k8s-coredns--6f6b679f8f--k4kch-eth0" Jan 30 13:03:15.754228 containerd[1447]: 2025-01-30 13:03:15.739 [INFO][5417] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:03:15.754228 containerd[1447]: 2025-01-30 13:03:15.740 [INFO][5417] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 13:03:15.754228 containerd[1447]: 2025-01-30 13:03:15.749 [WARNING][5417] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="46d93fa456940183af25a2d8b69c33bc9abfeba16d9106ec032afae4f6888f7b" HandleID="k8s-pod-network.46d93fa456940183af25a2d8b69c33bc9abfeba16d9106ec032afae4f6888f7b" Workload="localhost-k8s-coredns--6f6b679f8f--k4kch-eth0" Jan 30 13:03:15.754228 containerd[1447]: 2025-01-30 13:03:15.749 [INFO][5417] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="46d93fa456940183af25a2d8b69c33bc9abfeba16d9106ec032afae4f6888f7b" HandleID="k8s-pod-network.46d93fa456940183af25a2d8b69c33bc9abfeba16d9106ec032afae4f6888f7b" Workload="localhost-k8s-coredns--6f6b679f8f--k4kch-eth0" Jan 30 13:03:15.754228 containerd[1447]: 2025-01-30 13:03:15.751 [INFO][5417] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:03:15.754228 containerd[1447]: 2025-01-30 13:03:15.752 [INFO][5409] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="46d93fa456940183af25a2d8b69c33bc9abfeba16d9106ec032afae4f6888f7b" Jan 30 13:03:15.754746 containerd[1447]: time="2025-01-30T13:03:15.754339178Z" level=info msg="TearDown network for sandbox \"46d93fa456940183af25a2d8b69c33bc9abfeba16d9106ec032afae4f6888f7b\" successfully" Jan 30 13:03:15.784279 containerd[1447]: time="2025-01-30T13:03:15.784186972Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"46d93fa456940183af25a2d8b69c33bc9abfeba16d9106ec032afae4f6888f7b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 13:03:15.784700 containerd[1447]: time="2025-01-30T13:03:15.784292012Z" level=info msg="RemovePodSandbox \"46d93fa456940183af25a2d8b69c33bc9abfeba16d9106ec032afae4f6888f7b\" returns successfully" Jan 30 13:03:15.784925 containerd[1447]: time="2025-01-30T13:03:15.784886292Z" level=info msg="StopPodSandbox for \"e4dd4ad6126e00360bfc4071a74b5835efabbe556438724ffb9eb180e0498716\"" Jan 30 13:03:15.856515 containerd[1447]: 2025-01-30 13:03:15.821 [WARNING][5440] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e4dd4ad6126e00360bfc4071a74b5835efabbe556438724ffb9eb180e0498716" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--slr48-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"e9f4c40c-83e4-4e09-bcf8-7d4d055d34c9", ResourceVersion:"931", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 2, 28, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8392468656b32d926bc08bb347d0faaf0648b19b0f982248d1923b950701021d", Pod:"csi-node-driver-slr48", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali978ee8f3de7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:03:15.856515 containerd[1447]: 2025-01-30 13:03:15.821 [INFO][5440] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="e4dd4ad6126e00360bfc4071a74b5835efabbe556438724ffb9eb180e0498716" Jan 30 13:03:15.856515 containerd[1447]: 2025-01-30 13:03:15.821 [INFO][5440] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e4dd4ad6126e00360bfc4071a74b5835efabbe556438724ffb9eb180e0498716" iface="eth0" netns="" Jan 30 13:03:15.856515 containerd[1447]: 2025-01-30 13:03:15.821 [INFO][5440] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e4dd4ad6126e00360bfc4071a74b5835efabbe556438724ffb9eb180e0498716" Jan 30 13:03:15.856515 containerd[1447]: 2025-01-30 13:03:15.821 [INFO][5440] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e4dd4ad6126e00360bfc4071a74b5835efabbe556438724ffb9eb180e0498716" Jan 30 13:03:15.856515 containerd[1447]: 2025-01-30 13:03:15.841 [INFO][5448] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e4dd4ad6126e00360bfc4071a74b5835efabbe556438724ffb9eb180e0498716" HandleID="k8s-pod-network.e4dd4ad6126e00360bfc4071a74b5835efabbe556438724ffb9eb180e0498716" Workload="localhost-k8s-csi--node--driver--slr48-eth0" Jan 30 13:03:15.856515 containerd[1447]: 2025-01-30 13:03:15.841 [INFO][5448] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:03:15.856515 containerd[1447]: 2025-01-30 13:03:15.841 [INFO][5448] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:03:15.856515 containerd[1447]: 2025-01-30 13:03:15.851 [WARNING][5448] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e4dd4ad6126e00360bfc4071a74b5835efabbe556438724ffb9eb180e0498716" HandleID="k8s-pod-network.e4dd4ad6126e00360bfc4071a74b5835efabbe556438724ffb9eb180e0498716" Workload="localhost-k8s-csi--node--driver--slr48-eth0" Jan 30 13:03:15.856515 containerd[1447]: 2025-01-30 13:03:15.851 [INFO][5448] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e4dd4ad6126e00360bfc4071a74b5835efabbe556438724ffb9eb180e0498716" HandleID="k8s-pod-network.e4dd4ad6126e00360bfc4071a74b5835efabbe556438724ffb9eb180e0498716" Workload="localhost-k8s-csi--node--driver--slr48-eth0" Jan 30 13:03:15.856515 containerd[1447]: 2025-01-30 13:03:15.853 [INFO][5448] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:03:15.856515 containerd[1447]: 2025-01-30 13:03:15.854 [INFO][5440] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="e4dd4ad6126e00360bfc4071a74b5835efabbe556438724ffb9eb180e0498716" Jan 30 13:03:15.856515 containerd[1447]: time="2025-01-30T13:03:15.856467836Z" level=info msg="TearDown network for sandbox \"e4dd4ad6126e00360bfc4071a74b5835efabbe556438724ffb9eb180e0498716\" successfully" Jan 30 13:03:15.857264 containerd[1447]: time="2025-01-30T13:03:15.856926876Z" level=info msg="StopPodSandbox for \"e4dd4ad6126e00360bfc4071a74b5835efabbe556438724ffb9eb180e0498716\" returns successfully" Jan 30 13:03:15.857992 containerd[1447]: time="2025-01-30T13:03:15.857662916Z" level=info msg="RemovePodSandbox for \"e4dd4ad6126e00360bfc4071a74b5835efabbe556438724ffb9eb180e0498716\"" Jan 30 13:03:15.857992 containerd[1447]: time="2025-01-30T13:03:15.857698996Z" level=info msg="Forcibly stopping sandbox \"e4dd4ad6126e00360bfc4071a74b5835efabbe556438724ffb9eb180e0498716\"" Jan 30 13:03:15.937831 containerd[1447]: 2025-01-30 13:03:15.899 [WARNING][5470] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e4dd4ad6126e00360bfc4071a74b5835efabbe556438724ffb9eb180e0498716" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--slr48-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"e9f4c40c-83e4-4e09-bcf8-7d4d055d34c9", ResourceVersion:"931", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 2, 28, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8392468656b32d926bc08bb347d0faaf0648b19b0f982248d1923b950701021d", Pod:"csi-node-driver-slr48", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali978ee8f3de7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:03:15.937831 containerd[1447]: 2025-01-30 13:03:15.900 [INFO][5470] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="e4dd4ad6126e00360bfc4071a74b5835efabbe556438724ffb9eb180e0498716" Jan 30 13:03:15.937831 containerd[1447]: 2025-01-30 13:03:15.900 [INFO][5470] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e4dd4ad6126e00360bfc4071a74b5835efabbe556438724ffb9eb180e0498716" iface="eth0" netns="" Jan 30 13:03:15.937831 containerd[1447]: 2025-01-30 13:03:15.900 [INFO][5470] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e4dd4ad6126e00360bfc4071a74b5835efabbe556438724ffb9eb180e0498716" Jan 30 13:03:15.937831 containerd[1447]: 2025-01-30 13:03:15.900 [INFO][5470] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e4dd4ad6126e00360bfc4071a74b5835efabbe556438724ffb9eb180e0498716" Jan 30 13:03:15.937831 containerd[1447]: 2025-01-30 13:03:15.923 [INFO][5478] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e4dd4ad6126e00360bfc4071a74b5835efabbe556438724ffb9eb180e0498716" HandleID="k8s-pod-network.e4dd4ad6126e00360bfc4071a74b5835efabbe556438724ffb9eb180e0498716" Workload="localhost-k8s-csi--node--driver--slr48-eth0" Jan 30 13:03:15.937831 containerd[1447]: 2025-01-30 13:03:15.924 [INFO][5478] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:03:15.937831 containerd[1447]: 2025-01-30 13:03:15.924 [INFO][5478] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:03:15.937831 containerd[1447]: 2025-01-30 13:03:15.933 [WARNING][5478] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e4dd4ad6126e00360bfc4071a74b5835efabbe556438724ffb9eb180e0498716" HandleID="k8s-pod-network.e4dd4ad6126e00360bfc4071a74b5835efabbe556438724ffb9eb180e0498716" Workload="localhost-k8s-csi--node--driver--slr48-eth0" Jan 30 13:03:15.937831 containerd[1447]: 2025-01-30 13:03:15.933 [INFO][5478] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e4dd4ad6126e00360bfc4071a74b5835efabbe556438724ffb9eb180e0498716" HandleID="k8s-pod-network.e4dd4ad6126e00360bfc4071a74b5835efabbe556438724ffb9eb180e0498716" Workload="localhost-k8s-csi--node--driver--slr48-eth0" Jan 30 13:03:15.937831 containerd[1447]: 2025-01-30 13:03:15.934 [INFO][5478] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:03:15.937831 containerd[1447]: 2025-01-30 13:03:15.936 [INFO][5470] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="e4dd4ad6126e00360bfc4071a74b5835efabbe556438724ffb9eb180e0498716" Jan 30 13:03:15.937831 containerd[1447]: time="2025-01-30T13:03:15.937745818Z" level=info msg="TearDown network for sandbox \"e4dd4ad6126e00360bfc4071a74b5835efabbe556438724ffb9eb180e0498716\" successfully" Jan 30 13:03:15.946451 containerd[1447]: time="2025-01-30T13:03:15.946400136Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e4dd4ad6126e00360bfc4071a74b5835efabbe556438724ffb9eb180e0498716\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 13:03:15.946583 containerd[1447]: time="2025-01-30T13:03:15.946508456Z" level=info msg="RemovePodSandbox \"e4dd4ad6126e00360bfc4071a74b5835efabbe556438724ffb9eb180e0498716\" returns successfully" Jan 30 13:03:15.947366 containerd[1447]: time="2025-01-30T13:03:15.947030256Z" level=info msg="StopPodSandbox for \"0b829235a0188946a6449c9e3ddc7b91946992c18efc247bd07d9cc756ab37ab\"" Jan 30 13:03:16.036370 containerd[1447]: 2025-01-30 13:03:15.990 [WARNING][5501] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0b829235a0188946a6449c9e3ddc7b91946992c18efc247bd07d9cc756ab37ab" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6cb779964f--9bb6z-eth0", GenerateName:"calico-apiserver-6cb779964f-", Namespace:"calico-apiserver", SelfLink:"", UID:"375a330c-8230-4057-b70e-a0f2609c831f", ResourceVersion:"985", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 2, 27, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6cb779964f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ec132c2b7f9a491a92d438ecafc5a14c1c5aae2b8254433f1c6c7519153f6fa0", Pod:"calico-apiserver-6cb779964f-9bb6z", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliecbae0db9ae", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:03:16.036370 containerd[1447]: 2025-01-30 13:03:15.990 [INFO][5501] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="0b829235a0188946a6449c9e3ddc7b91946992c18efc247bd07d9cc756ab37ab" Jan 30 13:03:16.036370 containerd[1447]: 2025-01-30 13:03:15.990 [INFO][5501] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0b829235a0188946a6449c9e3ddc7b91946992c18efc247bd07d9cc756ab37ab" iface="eth0" netns="" Jan 30 13:03:16.036370 containerd[1447]: 2025-01-30 13:03:15.990 [INFO][5501] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="0b829235a0188946a6449c9e3ddc7b91946992c18efc247bd07d9cc756ab37ab" Jan 30 13:03:16.036370 containerd[1447]: 2025-01-30 13:03:15.990 [INFO][5501] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0b829235a0188946a6449c9e3ddc7b91946992c18efc247bd07d9cc756ab37ab" Jan 30 13:03:16.036370 containerd[1447]: 2025-01-30 13:03:16.016 [INFO][5509] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0b829235a0188946a6449c9e3ddc7b91946992c18efc247bd07d9cc756ab37ab" HandleID="k8s-pod-network.0b829235a0188946a6449c9e3ddc7b91946992c18efc247bd07d9cc756ab37ab" Workload="localhost-k8s-calico--apiserver--6cb779964f--9bb6z-eth0" Jan 30 13:03:16.036370 containerd[1447]: 2025-01-30 13:03:16.016 [INFO][5509] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:03:16.036370 containerd[1447]: 2025-01-30 13:03:16.016 [INFO][5509] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:03:16.036370 containerd[1447]: 2025-01-30 13:03:16.027 [WARNING][5509] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0b829235a0188946a6449c9e3ddc7b91946992c18efc247bd07d9cc756ab37ab" HandleID="k8s-pod-network.0b829235a0188946a6449c9e3ddc7b91946992c18efc247bd07d9cc756ab37ab" Workload="localhost-k8s-calico--apiserver--6cb779964f--9bb6z-eth0" Jan 30 13:03:16.036370 containerd[1447]: 2025-01-30 13:03:16.027 [INFO][5509] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0b829235a0188946a6449c9e3ddc7b91946992c18efc247bd07d9cc756ab37ab" HandleID="k8s-pod-network.0b829235a0188946a6449c9e3ddc7b91946992c18efc247bd07d9cc756ab37ab" Workload="localhost-k8s-calico--apiserver--6cb779964f--9bb6z-eth0" Jan 30 13:03:16.036370 containerd[1447]: 2025-01-30 13:03:16.032 [INFO][5509] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:03:16.036370 containerd[1447]: 2025-01-30 13:03:16.034 [INFO][5501] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="0b829235a0188946a6449c9e3ddc7b91946992c18efc247bd07d9cc756ab37ab" Jan 30 13:03:16.036370 containerd[1447]: time="2025-01-30T13:03:16.036213157Z" level=info msg="TearDown network for sandbox \"0b829235a0188946a6449c9e3ddc7b91946992c18efc247bd07d9cc756ab37ab\" successfully" Jan 30 13:03:16.036370 containerd[1447]: time="2025-01-30T13:03:16.036242437Z" level=info msg="StopPodSandbox for \"0b829235a0188946a6449c9e3ddc7b91946992c18efc247bd07d9cc756ab37ab\" returns successfully" Jan 30 13:03:16.037572 containerd[1447]: time="2025-01-30T13:03:16.037539236Z" level=info msg="RemovePodSandbox for \"0b829235a0188946a6449c9e3ddc7b91946992c18efc247bd07d9cc756ab37ab\"" Jan 30 13:03:16.037653 containerd[1447]: time="2025-01-30T13:03:16.037577756Z" level=info msg="Forcibly stopping sandbox \"0b829235a0188946a6449c9e3ddc7b91946992c18efc247bd07d9cc756ab37ab\"" Jan 30 13:03:16.124477 containerd[1447]: 2025-01-30 13:03:16.078 [WARNING][5549] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0b829235a0188946a6449c9e3ddc7b91946992c18efc247bd07d9cc756ab37ab" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6cb779964f--9bb6z-eth0", GenerateName:"calico-apiserver-6cb779964f-", Namespace:"calico-apiserver", SelfLink:"", UID:"375a330c-8230-4057-b70e-a0f2609c831f", ResourceVersion:"985", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 2, 27, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6cb779964f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ec132c2b7f9a491a92d438ecafc5a14c1c5aae2b8254433f1c6c7519153f6fa0", Pod:"calico-apiserver-6cb779964f-9bb6z", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliecbae0db9ae", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:03:16.124477 containerd[1447]: 2025-01-30 13:03:16.078 [INFO][5549] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="0b829235a0188946a6449c9e3ddc7b91946992c18efc247bd07d9cc756ab37ab" Jan 30 13:03:16.124477 containerd[1447]: 2025-01-30 13:03:16.078 [INFO][5549] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0b829235a0188946a6449c9e3ddc7b91946992c18efc247bd07d9cc756ab37ab" iface="eth0" netns="" Jan 30 13:03:16.124477 containerd[1447]: 2025-01-30 13:03:16.078 [INFO][5549] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="0b829235a0188946a6449c9e3ddc7b91946992c18efc247bd07d9cc756ab37ab" Jan 30 13:03:16.124477 containerd[1447]: 2025-01-30 13:03:16.078 [INFO][5549] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0b829235a0188946a6449c9e3ddc7b91946992c18efc247bd07d9cc756ab37ab" Jan 30 13:03:16.124477 containerd[1447]: 2025-01-30 13:03:16.102 [INFO][5560] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0b829235a0188946a6449c9e3ddc7b91946992c18efc247bd07d9cc756ab37ab" HandleID="k8s-pod-network.0b829235a0188946a6449c9e3ddc7b91946992c18efc247bd07d9cc756ab37ab" Workload="localhost-k8s-calico--apiserver--6cb779964f--9bb6z-eth0" Jan 30 13:03:16.124477 containerd[1447]: 2025-01-30 13:03:16.102 [INFO][5560] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:03:16.124477 containerd[1447]: 2025-01-30 13:03:16.102 [INFO][5560] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:03:16.124477 containerd[1447]: 2025-01-30 13:03:16.114 [WARNING][5560] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0b829235a0188946a6449c9e3ddc7b91946992c18efc247bd07d9cc756ab37ab" HandleID="k8s-pod-network.0b829235a0188946a6449c9e3ddc7b91946992c18efc247bd07d9cc756ab37ab" Workload="localhost-k8s-calico--apiserver--6cb779964f--9bb6z-eth0" Jan 30 13:03:16.124477 containerd[1447]: 2025-01-30 13:03:16.114 [INFO][5560] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0b829235a0188946a6449c9e3ddc7b91946992c18efc247bd07d9cc756ab37ab" HandleID="k8s-pod-network.0b829235a0188946a6449c9e3ddc7b91946992c18efc247bd07d9cc756ab37ab" Workload="localhost-k8s-calico--apiserver--6cb779964f--9bb6z-eth0" Jan 30 13:03:16.124477 containerd[1447]: 2025-01-30 13:03:16.118 [INFO][5560] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:03:16.124477 containerd[1447]: 2025-01-30 13:03:16.122 [INFO][5549] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="0b829235a0188946a6449c9e3ddc7b91946992c18efc247bd07d9cc756ab37ab" Jan 30 13:03:16.124915 containerd[1447]: time="2025-01-30T13:03:16.124520058Z" level=info msg="TearDown network for sandbox \"0b829235a0188946a6449c9e3ddc7b91946992c18efc247bd07d9cc756ab37ab\" successfully" Jan 30 13:03:16.129163 containerd[1447]: time="2025-01-30T13:03:16.129108857Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0b829235a0188946a6449c9e3ddc7b91946992c18efc247bd07d9cc756ab37ab\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 13:03:16.129572 containerd[1447]: time="2025-01-30T13:03:16.129181337Z" level=info msg="RemovePodSandbox \"0b829235a0188946a6449c9e3ddc7b91946992c18efc247bd07d9cc756ab37ab\" returns successfully" Jan 30 13:03:19.072451 systemd[1]: Started sshd@18-10.0.0.89:22-10.0.0.1:36460.service - OpenSSH per-connection server daemon (10.0.0.1:36460). Jan 30 13:03:19.121470 sshd[5568]: Accepted publickey for core from 10.0.0.1 port 36460 ssh2: RSA SHA256:L/o4MUo/PVZc4DGxUVgHzL5cb5Nt9LAG1MaqWWrUsp0 Jan 30 13:03:19.123047 sshd[5568]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:03:19.128166 systemd-logind[1423]: New session 19 of user core. Jan 30 13:03:19.137840 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 30 13:03:19.344405 sshd[5568]: pam_unix(sshd:session): session closed for user core Jan 30 13:03:19.349238 systemd[1]: sshd@18-10.0.0.89:22-10.0.0.1:36460.service: Deactivated successfully. Jan 30 13:03:19.351779 systemd[1]: session-19.scope: Deactivated successfully. Jan 30 13:03:19.354276 systemd-logind[1423]: Session 19 logged out. Waiting for processes to exit. Jan 30 13:03:19.355685 systemd-logind[1423]: Removed session 19. Jan 30 13:03:24.356020 systemd[1]: Started sshd@19-10.0.0.89:22-10.0.0.1:43986.service - OpenSSH per-connection server daemon (10.0.0.1:43986). Jan 30 13:03:24.399926 sshd[5585]: Accepted publickey for core from 10.0.0.1 port 43986 ssh2: RSA SHA256:L/o4MUo/PVZc4DGxUVgHzL5cb5Nt9LAG1MaqWWrUsp0 Jan 30 13:03:24.401591 sshd[5585]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:03:24.409180 systemd-logind[1423]: New session 20 of user core. Jan 30 13:03:24.419886 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 30 13:03:24.589297 sshd[5585]: pam_unix(sshd:session): session closed for user core Jan 30 13:03:24.593320 systemd[1]: sshd@19-10.0.0.89:22-10.0.0.1:43986.service: Deactivated successfully. Jan 30 13:03:24.597508 systemd[1]: session-20.scope: Deactivated successfully. 
Jan 30 13:03:24.599520 systemd-logind[1423]: Session 20 logged out. Waiting for processes to exit. Jan 30 13:03:24.602384 systemd-logind[1423]: Removed session 20. Jan 30 13:03:25.036155 kubelet[2447]: E0130 13:03:25.036102 2447 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"