Jan 13 21:13:59.931441 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Jan 13 21:13:59.931462 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Mon Jan 13 19:43:39 -00 2025 Jan 13 21:13:59.931472 kernel: KASLR enabled Jan 13 21:13:59.931478 kernel: efi: EFI v2.7 by EDK II Jan 13 21:13:59.931484 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdba86018 ACPI 2.0=0xd9710018 RNG=0xd971e498 MEMRESERVE=0xd9b43d18 Jan 13 21:13:59.931490 kernel: random: crng init done Jan 13 21:13:59.931498 kernel: ACPI: Early table checksum verification disabled Jan 13 21:13:59.931504 kernel: ACPI: RSDP 0x00000000D9710018 000024 (v02 BOCHS ) Jan 13 21:13:59.931511 kernel: ACPI: XSDT 0x00000000D971FE98 000064 (v01 BOCHS BXPC 00000001 01000013) Jan 13 21:13:59.931519 kernel: ACPI: FACP 0x00000000D971FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) Jan 13 21:13:59.931525 kernel: ACPI: DSDT 0x00000000D9717518 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001) Jan 13 21:13:59.931532 kernel: ACPI: APIC 0x00000000D971FC18 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001) Jan 13 21:13:59.931538 kernel: ACPI: PPTT 0x00000000D971D898 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001) Jan 13 21:13:59.931544 kernel: ACPI: GTDT 0x00000000D971E818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Jan 13 21:13:59.931552 kernel: ACPI: MCFG 0x00000000D971E918 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 13 21:13:59.931560 kernel: ACPI: SPCR 0x00000000D971FF98 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) Jan 13 21:13:59.931567 kernel: ACPI: DBG2 0x00000000D971E418 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) Jan 13 21:13:59.931574 kernel: ACPI: IORT 0x00000000D971E718 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jan 13 21:13:59.931580 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600 Jan 13 21:13:59.931587 kernel: NUMA: Failed to initialise from firmware Jan 13 21:13:59.931594 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff] Jan 13 21:13:59.931600 kernel: NUMA: NODE_DATA [mem 0xdc958800-0xdc95dfff] Jan 13 21:13:59.931607 kernel: Zone ranges: Jan 13 21:13:59.931613 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff] Jan 13 21:13:59.931620 kernel: DMA32 empty Jan 13 21:13:59.931628 kernel: Normal empty Jan 13 21:13:59.931634 kernel: Movable zone start for each node Jan 13 21:13:59.931641 kernel: Early memory node ranges Jan 13 21:13:59.931647 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff] Jan 13 21:13:59.931654 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff] Jan 13 21:13:59.931660 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff] Jan 13 21:13:59.931667 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff] Jan 13 21:13:59.931673 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff] Jan 13 21:13:59.931680 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff] Jan 13 21:13:59.931687 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff] Jan 13 21:13:59.931693 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff] Jan 13 21:13:59.931700 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges Jan 13 21:13:59.931708 kernel: psci: probing for conduit method from ACPI. Jan 13 21:13:59.931715 kernel: psci: PSCIv1.1 detected in firmware. 
Jan 13 21:13:59.931721 kernel: psci: Using standard PSCI v0.2 function IDs Jan 13 21:13:59.931731 kernel: psci: Trusted OS migration not required Jan 13 21:13:59.931738 kernel: psci: SMC Calling Convention v1.1 Jan 13 21:13:59.931745 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) Jan 13 21:13:59.931761 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976 Jan 13 21:13:59.931769 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096 Jan 13 21:13:59.931776 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 Jan 13 21:13:59.931783 kernel: Detected PIPT I-cache on CPU0 Jan 13 21:13:59.931790 kernel: CPU features: detected: GIC system register CPU interface Jan 13 21:13:59.931804 kernel: CPU features: detected: Hardware dirty bit management Jan 13 21:13:59.931812 kernel: CPU features: detected: Spectre-v4 Jan 13 21:13:59.931819 kernel: CPU features: detected: Spectre-BHB Jan 13 21:13:59.931826 kernel: CPU features: kernel page table isolation forced ON by KASLR Jan 13 21:13:59.931833 kernel: CPU features: detected: Kernel page table isolation (KPTI) Jan 13 21:13:59.931842 kernel: CPU features: detected: ARM erratum 1418040 Jan 13 21:13:59.931849 kernel: CPU features: detected: SSBS not fully self-synchronizing Jan 13 21:13:59.931856 kernel: alternatives: applying boot alternatives Jan 13 21:13:59.931864 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=c6a3a48cbc65bf640516dc59d6b026e304001b7b3125ecbabbbe9ce0bd8888f0 Jan 13 21:13:59.931871 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jan 13 21:13:59.931878 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 13 21:13:59.931885 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 13 21:13:59.931892 kernel: Fallback order for Node 0: 0 Jan 13 21:13:59.931900 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024 Jan 13 21:13:59.931907 kernel: Policy zone: DMA Jan 13 21:13:59.931914 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 13 21:13:59.931922 kernel: software IO TLB: area num 4. Jan 13 21:13:59.931929 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB) Jan 13 21:13:59.931937 kernel: Memory: 2386532K/2572288K available (10240K kernel code, 2184K rwdata, 8096K rodata, 39360K init, 897K bss, 185756K reserved, 0K cma-reserved) Jan 13 21:13:59.931944 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Jan 13 21:13:59.931951 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 13 21:13:59.931959 kernel: rcu: RCU event tracing is enabled. Jan 13 21:13:59.931966 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Jan 13 21:13:59.931973 kernel: Trampoline variant of Tasks RCU enabled. Jan 13 21:13:59.931980 kernel: Tracing variant of Tasks RCU enabled. Jan 13 21:13:59.931987 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Jan 13 21:13:59.931994 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Jan 13 21:13:59.932001 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Jan 13 21:13:59.932010 kernel: GICv3: 256 SPIs implemented Jan 13 21:13:59.932017 kernel: GICv3: 0 Extended SPIs implemented Jan 13 21:13:59.932024 kernel: Root IRQ handler: gic_handle_irq Jan 13 21:13:59.932031 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Jan 13 21:13:59.932038 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 Jan 13 21:13:59.932045 kernel: ITS [mem 0x08080000-0x0809ffff] Jan 13 21:13:59.932052 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1) Jan 13 21:13:59.932059 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1) Jan 13 21:13:59.932066 kernel: GICv3: using LPI property table @0x00000000400f0000 Jan 13 21:13:59.932073 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000 Jan 13 21:13:59.932080 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 13 21:13:59.932089 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jan 13 21:13:59.932096 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Jan 13 21:13:59.932103 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Jan 13 21:13:59.932110 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Jan 13 21:13:59.932117 kernel: arm-pv: using stolen time PV Jan 13 21:13:59.932125 kernel: Console: colour dummy device 80x25 Jan 13 21:13:59.932132 kernel: ACPI: Core revision 20230628 Jan 13 21:13:59.932139 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Jan 13 21:13:59.932146 kernel: pid_max: default: 32768 minimum: 301 Jan 13 21:13:59.932154 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 13 21:13:59.932162 kernel: landlock: Up and running. Jan 13 21:13:59.932169 kernel: SELinux: Initializing. Jan 13 21:13:59.932176 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 13 21:13:59.932184 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 13 21:13:59.932191 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 13 21:13:59.932198 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 13 21:13:59.932206 kernel: rcu: Hierarchical SRCU implementation. Jan 13 21:13:59.932213 kernel: rcu: Max phase no-delay instances is 400. Jan 13 21:13:59.932220 kernel: Platform MSI: ITS@0x8080000 domain created Jan 13 21:13:59.932229 kernel: PCI/MSI: ITS@0x8080000 domain created Jan 13 21:13:59.932236 kernel: Remapping and enabling EFI services. Jan 13 21:13:59.932243 kernel: smp: Bringing up secondary CPUs ... 
Jan 13 21:13:59.932251 kernel: Detected PIPT I-cache on CPU1 Jan 13 21:13:59.932258 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 Jan 13 21:13:59.932266 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000 Jan 13 21:13:59.932273 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jan 13 21:13:59.932280 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Jan 13 21:13:59.932288 kernel: Detected PIPT I-cache on CPU2 Jan 13 21:13:59.932295 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000 Jan 13 21:13:59.932304 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000 Jan 13 21:13:59.932311 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jan 13 21:13:59.932323 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1] Jan 13 21:13:59.932332 kernel: Detected PIPT I-cache on CPU3 Jan 13 21:13:59.932339 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000 Jan 13 21:13:59.932347 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000 Jan 13 21:13:59.932355 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jan 13 21:13:59.932362 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1] Jan 13 21:13:59.932370 kernel: smp: Brought up 1 node, 4 CPUs Jan 13 21:13:59.932379 kernel: SMP: Total of 4 processors activated. Jan 13 21:13:59.932386 kernel: CPU features: detected: 32-bit EL0 Support Jan 13 21:13:59.932394 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Jan 13 21:13:59.932402 kernel: CPU features: detected: Common not Private translations Jan 13 21:13:59.932409 kernel: CPU features: detected: CRC32 instructions Jan 13 21:13:59.932417 kernel: CPU features: detected: Enhanced Virtualization Traps Jan 13 21:13:59.932424 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Jan 13 21:13:59.932432 kernel: CPU features: detected: LSE atomic instructions Jan 13 21:13:59.932440 kernel: CPU features: detected: Privileged Access Never Jan 13 21:13:59.932448 kernel: CPU features: detected: RAS Extension Support Jan 13 21:13:59.932456 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Jan 13 21:13:59.932463 kernel: CPU: All CPU(s) started at EL1 Jan 13 21:13:59.932471 kernel: alternatives: applying system-wide alternatives Jan 13 21:13:59.932478 kernel: devtmpfs: initialized Jan 13 21:13:59.932486 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 13 21:13:59.932494 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Jan 13 21:13:59.932502 kernel: pinctrl core: initialized pinctrl subsystem Jan 13 21:13:59.932511 kernel: SMBIOS 3.0.0 present. 
Jan 13 21:13:59.932519 kernel: DMI: QEMU KVM Virtual Machine, BIOS edk2-20230524-3.fc38 05/24/2023 Jan 13 21:13:59.932527 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 13 21:13:59.932534 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Jan 13 21:13:59.932542 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Jan 13 21:13:59.932550 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Jan 13 21:13:59.932558 kernel: audit: initializing netlink subsys (disabled) Jan 13 21:13:59.932565 kernel: audit: type=2000 audit(0.024:1): state=initialized audit_enabled=0 res=1 Jan 13 21:13:59.932573 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 13 21:13:59.932582 kernel: cpuidle: using governor menu Jan 13 21:13:59.932590 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Jan 13 21:13:59.932597 kernel: ASID allocator initialised with 32768 entries Jan 13 21:13:59.932605 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 13 21:13:59.932613 kernel: Serial: AMBA PL011 UART driver Jan 13 21:13:59.932620 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Jan 13 21:13:59.932628 kernel: Modules: 0 pages in range for non-PLT usage Jan 13 21:13:59.932636 kernel: Modules: 509040 pages in range for PLT usage Jan 13 21:13:59.932643 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 13 21:13:59.932652 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Jan 13 21:13:59.932660 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Jan 13 21:13:59.932667 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Jan 13 21:13:59.932675 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 13 21:13:59.932683 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Jan 13 21:13:59.932690 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Jan 13 21:13:59.932698 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Jan 13 21:13:59.932705 kernel: ACPI: Added _OSI(Module Device) Jan 13 21:13:59.932713 kernel: ACPI: Added _OSI(Processor Device) Jan 13 21:13:59.932722 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jan 13 21:13:59.932729 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 13 21:13:59.932737 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 13 21:13:59.932744 kernel: ACPI: Interpreter enabled Jan 13 21:13:59.932756 kernel: ACPI: Using GIC for interrupt routing Jan 13 21:13:59.932765 kernel: ACPI: MCFG table detected, 1 entries Jan 13 21:13:59.932773 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA Jan 13 21:13:59.932780 kernel: printk: console [ttyAMA0] enabled Jan 13 21:13:59.932788 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 13 21:13:59.932931 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jan 13 21:13:59.933011 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Jan 13 21:13:59.933082 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Jan 13 21:13:59.933152 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 Jan 13 21:13:59.933217 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] Jan 13 21:13:59.933228 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] Jan 13 
21:13:59.933235 kernel: PCI host bridge to bus 0000:00 Jan 13 21:13:59.933310 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] Jan 13 21:13:59.933372 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Jan 13 21:13:59.933433 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] Jan 13 21:13:59.933493 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jan 13 21:13:59.933576 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 Jan 13 21:13:59.933652 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 Jan 13 21:13:59.933723 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f] Jan 13 21:13:59.933852 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff] Jan 13 21:13:59.933944 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref] Jan 13 21:13:59.934012 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref] Jan 13 21:13:59.934078 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff] Jan 13 21:13:59.934145 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f] Jan 13 21:13:59.934206 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Jan 13 21:13:59.934269 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Jan 13 21:13:59.934327 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Jan 13 21:13:59.934337 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Jan 13 21:13:59.934345 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Jan 13 21:13:59.934353 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Jan 13 21:13:59.934360 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Jan 13 21:13:59.934368 kernel: iommu: Default domain type: Translated Jan 13 21:13:59.934388 kernel: iommu: DMA domain TLB invalidation policy: strict mode Jan 13 21:13:59.934397 kernel: efivars: Registered efivars operations Jan 13 21:13:59.934405 kernel: vgaarb: loaded Jan 13 21:13:59.934412 kernel: clocksource: Switched to clocksource arch_sys_counter Jan 13 21:13:59.934426 kernel: VFS: Disk quotas dquot_6.6.0 Jan 13 21:13:59.934437 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 13 21:13:59.934446 kernel: pnp: PnP ACPI init Jan 13 21:13:59.934561 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Jan 13 21:13:59.934575 kernel: pnp: PnP ACPI: found 1 devices Jan 13 21:13:59.934582 kernel: NET: Registered PF_INET protocol family Jan 13 21:13:59.934593 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 13 21:13:59.934600 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jan 13 21:13:59.934608 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 13 21:13:59.934616 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 13 21:13:59.934623 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jan 13 21:13:59.934631 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jan 13 21:13:59.934638 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 13 21:13:59.934646 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 13 21:13:59.934653 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 13 21:13:59.934662 kernel: PCI: CLS 0 bytes, default 64 Jan 13 21:13:59.934669 kernel: kvm [1]: HYP mode 
not available Jan 13 21:13:59.934677 kernel: Initialise system trusted keyrings Jan 13 21:13:59.934684 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jan 13 21:13:59.934692 kernel: Key type asymmetric registered Jan 13 21:13:59.934699 kernel: Asymmetric key parser 'x509' registered Jan 13 21:13:59.934707 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jan 13 21:13:59.934714 kernel: io scheduler mq-deadline registered Jan 13 21:13:59.934722 kernel: io scheduler kyber registered Jan 13 21:13:59.934730 kernel: io scheduler bfq registered Jan 13 21:13:59.934738 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Jan 13 21:13:59.934745 kernel: ACPI: button: Power Button [PWRB] Jan 13 21:13:59.934760 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Jan 13 21:13:59.934844 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007) Jan 13 21:13:59.934855 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 13 21:13:59.934862 kernel: thunder_xcv, ver 1.0 Jan 13 21:13:59.934870 kernel: thunder_bgx, ver 1.0 Jan 13 21:13:59.934877 kernel: nicpf, ver 1.0 Jan 13 21:13:59.934886 kernel: nicvf, ver 1.0 Jan 13 21:13:59.934962 kernel: rtc-efi rtc-efi.0: registered as rtc0 Jan 13 21:13:59.935025 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-01-13T21:13:59 UTC (1736802839) Jan 13 21:13:59.935035 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 13 21:13:59.935042 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available Jan 13 21:13:59.935050 kernel: watchdog: Delayed init of the lockup detector failed: -19 Jan 13 21:13:59.935057 kernel: watchdog: Hard watchdog permanently disabled Jan 13 21:13:59.935065 kernel: NET: Registered PF_INET6 protocol family Jan 13 21:13:59.935074 kernel: Segment Routing with IPv6 Jan 13 21:13:59.935082 kernel: In-situ OAM (IOAM) with IPv6 Jan 13 21:13:59.935090 kernel: NET: Registered PF_PACKET protocol family Jan 13 21:13:59.935097 kernel: Key type dns_resolver registered Jan 13 21:13:59.935105 kernel: registered taskstats version 1 Jan 13 21:13:59.935112 kernel: Loading compiled-in X.509 certificates Jan 13 21:13:59.935120 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: 4d59b6166d6886703230c188f8df863190489638' Jan 13 21:13:59.935128 kernel: Key type .fscrypt registered Jan 13 21:13:59.935136 kernel: Key type fscrypt-provisioning registered Jan 13 21:13:59.935145 kernel: ima: No TPM chip found, activating TPM-bypass! 
Jan 13 21:13:59.935153 kernel: ima: Allocated hash algorithm: sha1 Jan 13 21:13:59.935160 kernel: ima: No architecture policies found Jan 13 21:13:59.935167 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Jan 13 21:13:59.935175 kernel: clk: Disabling unused clocks Jan 13 21:13:59.935182 kernel: Freeing unused kernel memory: 39360K Jan 13 21:13:59.935189 kernel: Run /init as init process Jan 13 21:13:59.935197 kernel: with arguments: Jan 13 21:13:59.935204 kernel: /init Jan 13 21:13:59.935212 kernel: with environment: Jan 13 21:13:59.935219 kernel: HOME=/ Jan 13 21:13:59.935227 kernel: TERM=linux Jan 13 21:13:59.935234 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 13 21:13:59.935243 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 13 21:13:59.935252 systemd[1]: Detected virtualization kvm. Jan 13 21:13:59.935260 systemd[1]: Detected architecture arm64. Jan 13 21:13:59.935269 systemd[1]: Running in initrd. Jan 13 21:13:59.935277 systemd[1]: No hostname configured, using default hostname. Jan 13 21:13:59.935285 systemd[1]: Hostname set to . Jan 13 21:13:59.935293 systemd[1]: Initializing machine ID from VM UUID. Jan 13 21:13:59.935301 systemd[1]: Queued start job for default target initrd.target. Jan 13 21:13:59.935309 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 13 21:13:59.935317 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 13 21:13:59.935326 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 13 21:13:59.935335 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 13 21:13:59.935343 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 13 21:13:59.935351 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 13 21:13:59.935361 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 13 21:13:59.935369 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 13 21:13:59.935377 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 13 21:13:59.935385 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 13 21:13:59.935394 systemd[1]: Reached target paths.target - Path Units. Jan 13 21:13:59.935402 systemd[1]: Reached target slices.target - Slice Units. Jan 13 21:13:59.935410 systemd[1]: Reached target swap.target - Swaps. Jan 13 21:13:59.935418 systemd[1]: Reached target timers.target - Timer Units. Jan 13 21:13:59.935426 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 13 21:13:59.935434 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 13 21:13:59.935442 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 13 21:13:59.935450 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 13 21:13:59.935458 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
Jan 13 21:13:59.935467 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 13 21:13:59.935475 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 13 21:13:59.935483 systemd[1]: Reached target sockets.target - Socket Units. Jan 13 21:13:59.935491 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 13 21:13:59.935499 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 13 21:13:59.935507 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 13 21:13:59.935515 systemd[1]: Starting systemd-fsck-usr.service... Jan 13 21:13:59.935523 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 13 21:13:59.935533 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 13 21:13:59.935569 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 21:13:59.935578 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 13 21:13:59.935586 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 13 21:13:59.935594 systemd[1]: Finished systemd-fsck-usr.service. Jan 13 21:13:59.935603 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 13 21:13:59.935613 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 21:13:59.935639 systemd-journald[237]: Collecting audit messages is disabled. Jan 13 21:13:59.935658 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 13 21:13:59.935668 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 13 21:13:59.935677 systemd-journald[237]: Journal started Jan 13 21:13:59.935696 systemd-journald[237]: Runtime Journal (/run/log/journal/38df085c6949496aaa5322d717fb6e7d) is 5.9M, max 47.3M, 41.4M free. Jan 13 21:13:59.928524 systemd-modules-load[238]: Inserted module 'overlay' Jan 13 21:13:59.939293 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 13 21:13:59.941287 systemd[1]: Started systemd-journald.service - Journal Service. Jan 13 21:13:59.944072 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 13 21:13:59.947942 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 13 21:13:59.949370 kernel: Bridge firewalling registered Jan 13 21:13:59.948766 systemd-modules-load[238]: Inserted module 'br_netfilter' Jan 13 21:13:59.949128 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 13 21:13:59.950497 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 13 21:13:59.964962 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 13 21:13:59.966165 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 13 21:13:59.968262 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 13 21:13:59.973027 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 13 21:13:59.975839 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 13 21:13:59.990946 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Jan 13 21:14:00.000874 dracut-cmdline[273]: dracut-dracut-053 Jan 13 21:14:00.003431 dracut-cmdline[273]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=c6a3a48cbc65bf640516dc59d6b026e304001b7b3125ecbabbbe9ce0bd8888f0 Jan 13 21:14:00.020952 systemd-resolved[275]: Positive Trust Anchors: Jan 13 21:14:00.020965 systemd-resolved[275]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 13 21:14:00.020996 systemd-resolved[275]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 13 21:14:00.025620 systemd-resolved[275]: Defaulting to hostname 'linux'. Jan 13 21:14:00.026567 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 13 21:14:00.030290 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 13 21:14:00.074822 kernel: SCSI subsystem initialized Jan 13 21:14:00.078821 kernel: Loading iSCSI transport class v2.0-870. Jan 13 21:14:00.086844 kernel: iscsi: registered transport (tcp) Jan 13 21:14:00.099834 kernel: iscsi: registered transport (qla4xxx) Jan 13 21:14:00.099853 kernel: QLogic iSCSI HBA Driver Jan 13 21:14:00.141521 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 13 21:14:00.150959 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 13 21:14:00.165827 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 13 21:14:00.165857 kernel: device-mapper: uevent: version 1.0.3 Jan 13 21:14:00.167836 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 13 21:14:00.212831 kernel: raid6: neonx8 gen() 15711 MB/s Jan 13 21:14:00.229824 kernel: raid6: neonx4 gen() 15541 MB/s Jan 13 21:14:00.246824 kernel: raid6: neonx2 gen() 13136 MB/s Jan 13 21:14:00.263823 kernel: raid6: neonx1 gen() 10454 MB/s Jan 13 21:14:00.280819 kernel: raid6: int64x8 gen() 6960 MB/s Jan 13 21:14:00.297815 kernel: raid6: int64x4 gen() 7334 MB/s Jan 13 21:14:00.314813 kernel: raid6: int64x2 gen() 6131 MB/s Jan 13 21:14:00.331900 kernel: raid6: int64x1 gen() 5056 MB/s Jan 13 21:14:00.331915 kernel: raid6: using algorithm neonx8 gen() 15711 MB/s Jan 13 21:14:00.349895 kernel: raid6: .... xor() 11918 MB/s, rmw enabled Jan 13 21:14:00.349907 kernel: raid6: using neon recovery algorithm Jan 13 21:14:00.355201 kernel: xor: measuring software checksum speed Jan 13 21:14:00.355220 kernel: 8regs : 19797 MB/sec Jan 13 21:14:00.355875 kernel: 32regs : 19674 MB/sec Jan 13 21:14:00.357105 kernel: arm64_neon : 26787 MB/sec Jan 13 21:14:00.357117 kernel: xor: using function: arm64_neon (26787 MB/sec) Jan 13 21:14:00.406828 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 13 21:14:00.417295 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. 
Jan 13 21:14:00.428938 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 13 21:14:00.439787 systemd-udevd[459]: Using default interface naming scheme 'v255'. Jan 13 21:14:00.442853 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 13 21:14:00.452952 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 13 21:14:00.464153 dracut-pre-trigger[469]: rd.md=0: removing MD RAID activation Jan 13 21:14:00.489573 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 13 21:14:00.496949 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 13 21:14:00.536466 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 13 21:14:00.546026 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 13 21:14:00.558246 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 13 21:14:00.559732 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 13 21:14:00.562284 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 13 21:14:00.564758 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 13 21:14:00.572942 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 13 21:14:00.583850 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 13 21:14:00.589749 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues Jan 13 21:14:00.596272 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Jan 13 21:14:00.596379 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 13 21:14:00.596390 kernel: GPT:9289727 != 19775487 Jan 13 21:14:00.596399 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 13 21:14:00.596409 kernel: GPT:9289727 != 19775487 Jan 13 21:14:00.596420 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 13 21:14:00.596430 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 13 21:14:00.598582 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 13 21:14:00.598692 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 13 21:14:00.602386 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 13 21:14:00.603626 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 13 21:14:00.603768 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 21:14:00.606036 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 21:14:00.618838 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (509) Jan 13 21:14:00.622816 kernel: BTRFS: device fsid 475b4555-939b-441c-9b47-b8244f532234 devid 1 transid 39 /dev/vda3 scanned by (udev-worker) (506) Jan 13 21:14:00.622104 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 21:14:00.637689 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jan 13 21:14:00.640857 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 21:14:00.645784 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jan 13 21:14:00.650474 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. 
Jan 13 21:14:00.654416 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jan 13 21:14:00.655631 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jan 13 21:14:00.669950 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 13 21:14:00.671765 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 13 21:14:00.677913 disk-uuid[551]: Primary Header is updated. Jan 13 21:14:00.677913 disk-uuid[551]: Secondary Entries is updated. Jan 13 21:14:00.677913 disk-uuid[551]: Secondary Header is updated. Jan 13 21:14:00.688024 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 13 21:14:00.692847 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 13 21:14:00.696824 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 13 21:14:01.694208 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 13 21:14:01.694273 disk-uuid[552]: The operation has completed successfully. Jan 13 21:14:01.720170 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 13 21:14:01.720265 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 13 21:14:01.742955 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 13 21:14:01.745812 sh[572]: Success Jan 13 21:14:01.758953 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Jan 13 21:14:01.790508 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 13 21:14:01.810168 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 13 21:14:01.811625 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 13 21:14:01.823447 kernel: BTRFS info (device dm-0): first mount of filesystem 475b4555-939b-441c-9b47-b8244f532234 Jan 13 21:14:01.823502 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jan 13 21:14:01.823513 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 13 21:14:01.824580 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 13 21:14:01.825387 kernel: BTRFS info (device dm-0): using free space tree Jan 13 21:14:01.829612 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 13 21:14:01.830995 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 13 21:14:01.843976 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 13 21:14:01.846469 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 13 21:14:01.855819 kernel: BTRFS info (device vda6): first mount of filesystem 1a82fd1a-1cbb-4d3a-bbb2-d4650cd9e9cd Jan 13 21:14:01.855866 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Jan 13 21:14:01.855877 kernel: BTRFS info (device vda6): using free space tree Jan 13 21:14:01.858819 kernel: BTRFS info (device vda6): auto enabling async discard Jan 13 21:14:01.867199 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 13 21:14:01.869148 kernel: BTRFS info (device vda6): last unmount of filesystem 1a82fd1a-1cbb-4d3a-bbb2-d4650cd9e9cd Jan 13 21:14:01.874546 systemd[1]: Finished ignition-setup.service - Ignition (setup). 
Jan 13 21:14:01.883001 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 13 21:14:01.945160 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 13 21:14:01.954076 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 13 21:14:01.984507 systemd-networkd[761]: lo: Link UP Jan 13 21:14:01.984628 ignition[667]: Ignition 2.19.0 Jan 13 21:14:01.984515 systemd-networkd[761]: lo: Gained carrier Jan 13 21:14:01.984635 ignition[667]: Stage: fetch-offline Jan 13 21:14:01.985208 systemd-networkd[761]: Enumeration completed Jan 13 21:14:01.984666 ignition[667]: no configs at "/usr/lib/ignition/base.d" Jan 13 21:14:01.985468 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 13 21:14:01.984674 ignition[667]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 13 21:14:01.985911 systemd-networkd[761]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 21:14:01.984909 ignition[667]: parsed url from cmdline: "" Jan 13 21:14:01.985915 systemd-networkd[761]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 13 21:14:01.984913 ignition[667]: no config URL provided Jan 13 21:14:01.987404 systemd[1]: Reached target network.target - Network. Jan 13 21:14:01.984917 ignition[667]: reading system config file "/usr/lib/ignition/user.ign" Jan 13 21:14:01.987594 systemd-networkd[761]: eth0: Link UP Jan 13 21:14:01.984925 ignition[667]: no config at "/usr/lib/ignition/user.ign" Jan 13 21:14:01.987597 systemd-networkd[761]: eth0: Gained carrier Jan 13 21:14:01.984947 ignition[667]: op(1): [started] loading QEMU firmware config module Jan 13 21:14:01.987606 systemd-networkd[761]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 21:14:01.984953 ignition[667]: op(1): executing: "modprobe" "qemu_fw_cfg" Jan 13 21:14:02.001840 systemd-networkd[761]: eth0: DHCPv4 address 10.0.0.48/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 13 21:14:01.992324 ignition[667]: op(1): [finished] loading QEMU firmware config module Jan 13 21:14:01.992344 ignition[667]: QEMU firmware config was not found. Ignoring... Jan 13 21:14:02.042874 ignition[667]: parsing config with SHA512: 92fe687689f81d0f153b81f39e49f6ee1e2c151f3a026c645adc089ac03d205e40c36e37a828e8babc031cdb9e7d9c54315c2f732ad1230b9ca2e9f56b9d1764 Jan 13 21:14:02.047845 unknown[667]: fetched base config from "system" Jan 13 21:14:02.048402 ignition[667]: fetch-offline: fetch-offline passed Jan 13 21:14:02.047855 unknown[667]: fetched user config from "qemu" Jan 13 21:14:02.048479 ignition[667]: Ignition finished successfully Jan 13 21:14:02.051840 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 13 21:14:02.053179 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jan 13 21:14:02.060983 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 13 21:14:02.071431 ignition[767]: Ignition 2.19.0 Jan 13 21:14:02.071441 ignition[767]: Stage: kargs Jan 13 21:14:02.071616 ignition[767]: no configs at "/usr/lib/ignition/base.d" Jan 13 21:14:02.071625 ignition[767]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 13 21:14:02.075492 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). 
Jan 13 21:14:02.072564 ignition[767]: kargs: kargs passed Jan 13 21:14:02.072609 ignition[767]: Ignition finished successfully Jan 13 21:14:02.081022 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 13 21:14:02.091814 ignition[774]: Ignition 2.19.0 Jan 13 21:14:02.091825 ignition[774]: Stage: disks Jan 13 21:14:02.091998 ignition[774]: no configs at "/usr/lib/ignition/base.d" Jan 13 21:14:02.092008 ignition[774]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 13 21:14:02.094460 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 13 21:14:02.092848 ignition[774]: disks: disks passed Jan 13 21:14:02.096208 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 13 21:14:02.092891 ignition[774]: Ignition finished successfully Jan 13 21:14:02.097894 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 13 21:14:02.099531 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 13 21:14:02.101429 systemd[1]: Reached target sysinit.target - System Initialization. Jan 13 21:14:02.103087 systemd[1]: Reached target basic.target - Basic System. Jan 13 21:14:02.105866 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 13 21:14:02.120104 systemd-fsck[785]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jan 13 21:14:02.124394 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 13 21:14:02.135917 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 13 21:14:02.177744 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 13 21:14:02.179304 kernel: EXT4-fs (vda9): mounted filesystem 238cddae-3c4d-4696-a666-660fd149aa3e r/w with ordered data mode. Quota mode: none. Jan 13 21:14:02.179048 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 13 21:14:02.190899 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 13 21:14:02.192713 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 13 21:14:02.194328 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 13 21:14:02.194369 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 13 21:14:02.204475 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (793) Jan 13 21:14:02.204499 kernel: BTRFS info (device vda6): first mount of filesystem 1a82fd1a-1cbb-4d3a-bbb2-d4650cd9e9cd Jan 13 21:14:02.204510 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Jan 13 21:14:02.204520 kernel: BTRFS info (device vda6): using free space tree Jan 13 21:14:02.194391 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 13 21:14:02.207460 kernel: BTRFS info (device vda6): auto enabling async discard Jan 13 21:14:02.202336 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 13 21:14:02.207149 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 13 21:14:02.212361 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 13 21:14:02.248252 initrd-setup-root[818]: cut: /sysroot/etc/passwd: No such file or directory Jan 13 21:14:02.251759 initrd-setup-root[825]: cut: /sysroot/etc/group: No such file or directory Jan 13 21:14:02.255765 initrd-setup-root[832]: cut: /sysroot/etc/shadow: No such file or directory Jan 13 21:14:02.259479 initrd-setup-root[839]: cut: /sysroot/etc/gshadow: No such file or directory Jan 13 21:14:02.331819 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 13 21:14:02.345936 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 13 21:14:02.347544 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 13 21:14:02.352833 kernel: BTRFS info (device vda6): last unmount of filesystem 1a82fd1a-1cbb-4d3a-bbb2-d4650cd9e9cd Jan 13 21:14:02.367250 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 13 21:14:02.371199 ignition[907]: INFO : Ignition 2.19.0 Jan 13 21:14:02.371199 ignition[907]: INFO : Stage: mount Jan 13 21:14:02.372777 ignition[907]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 13 21:14:02.372777 ignition[907]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 13 21:14:02.372777 ignition[907]: INFO : mount: mount passed Jan 13 21:14:02.372777 ignition[907]: INFO : Ignition finished successfully Jan 13 21:14:02.374143 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 13 21:14:02.396023 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 13 21:14:02.822138 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 13 21:14:02.830988 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 13 21:14:02.836865 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (920) Jan 13 21:14:02.839133 kernel: BTRFS info (device vda6): first mount of filesystem 1a82fd1a-1cbb-4d3a-bbb2-d4650cd9e9cd Jan 13 21:14:02.839150 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Jan 13 21:14:02.839160 kernel: BTRFS info (device vda6): using free space tree Jan 13 21:14:02.842822 kernel: BTRFS info (device vda6): auto enabling async discard Jan 13 21:14:02.843435 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 13 21:14:02.859592 ignition[937]: INFO : Ignition 2.19.0 Jan 13 21:14:02.859592 ignition[937]: INFO : Stage: files Jan 13 21:14:02.861378 ignition[937]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 13 21:14:02.861378 ignition[937]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 13 21:14:02.861378 ignition[937]: DEBUG : files: compiled without relabeling support, skipping Jan 13 21:14:02.865155 ignition[937]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 13 21:14:02.865155 ignition[937]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 13 21:14:02.865155 ignition[937]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 13 21:14:02.865155 ignition[937]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 13 21:14:02.865155 ignition[937]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 13 21:14:02.864699 unknown[937]: wrote ssh authorized keys file for user: core Jan 13 21:14:02.873061 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Jan 13 21:14:02.873061 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Jan 13 21:14:02.944397 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 13 21:14:03.093274 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Jan 13 21:14:03.093274 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 13 21:14:03.097192 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 13 21:14:03.097192 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 13 21:14:03.097192 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 13 21:14:03.097192 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 13 21:14:03.097192 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 13 21:14:03.097192 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 13 21:14:03.097192 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 13 21:14:03.097192 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 13 21:14:03.097192 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 13 21:14:03.097192 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" Jan 13 21:14:03.097192 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link 
"/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" Jan 13 21:14:03.097192 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" Jan 13 21:14:03.097192 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-arm64.raw: attempt #1 Jan 13 21:14:03.432160 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 13 21:14:03.497015 systemd-networkd[761]: eth0: Gained IPv6LL Jan 13 21:14:04.123427 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" Jan 13 21:14:04.123427 ignition[937]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 13 21:14:04.127286 ignition[937]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 13 21:14:04.127286 ignition[937]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 13 21:14:04.127286 ignition[937]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 13 21:14:04.127286 ignition[937]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Jan 13 21:14:04.127286 ignition[937]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 13 21:14:04.127286 ignition[937]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 13 21:14:04.127286 ignition[937]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Jan 13 21:14:04.127286 ignition[937]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Jan 13 21:14:04.153977 ignition[937]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Jan 13 21:14:04.157649 ignition[937]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jan 13 21:14:04.160187 ignition[937]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Jan 13 21:14:04.160187 ignition[937]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Jan 13 21:14:04.160187 ignition[937]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Jan 13 21:14:04.160187 ignition[937]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 13 21:14:04.160187 ignition[937]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 13 21:14:04.160187 ignition[937]: INFO : files: files passed Jan 13 21:14:04.160187 ignition[937]: INFO : Ignition finished successfully Jan 13 21:14:04.161051 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 13 21:14:04.175025 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 13 21:14:04.177740 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... 
Jan 13 21:14:04.179260 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 13 21:14:04.179351 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 13 21:14:04.185908 initrd-setup-root-after-ignition[966]: grep: /sysroot/oem/oem-release: No such file or directory Jan 13 21:14:04.188953 initrd-setup-root-after-ignition[968]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 13 21:14:04.188953 initrd-setup-root-after-ignition[968]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 13 21:14:04.192055 initrd-setup-root-after-ignition[972]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 13 21:14:04.195088 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 13 21:14:04.196637 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 13 21:14:04.212978 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 13 21:14:04.232249 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 13 21:14:04.232388 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 13 21:14:04.234684 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 13 21:14:04.236551 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 13 21:14:04.238518 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 13 21:14:04.239366 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 13 21:14:04.255117 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 13 21:14:04.272986 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 13 21:14:04.282921 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 13 21:14:04.284314 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 13 21:14:04.286787 systemd[1]: Stopped target timers.target - Timer Units. Jan 13 21:14:04.288603 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 13 21:14:04.288739 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 13 21:14:04.291232 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 13 21:14:04.293257 systemd[1]: Stopped target basic.target - Basic System. Jan 13 21:14:04.294900 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 13 21:14:04.296626 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 13 21:14:04.298606 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 13 21:14:04.300604 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 13 21:14:04.302486 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 13 21:14:04.304498 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 13 21:14:04.306483 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 13 21:14:04.308251 systemd[1]: Stopped target swap.target - Swaps. Jan 13 21:14:04.309774 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 13 21:14:04.309926 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 13 21:14:04.312325 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. 
Jan 13 21:14:04.314282 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 13 21:14:04.316200 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 13 21:14:04.316866 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 13 21:14:04.318315 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 13 21:14:04.318442 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 13 21:14:04.321282 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 13 21:14:04.321403 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 13 21:14:04.323424 systemd[1]: Stopped target paths.target - Path Units. Jan 13 21:14:04.325073 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 13 21:14:04.329855 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 13 21:14:04.331176 systemd[1]: Stopped target slices.target - Slice Units. Jan 13 21:14:04.333352 systemd[1]: Stopped target sockets.target - Socket Units. Jan 13 21:14:04.334951 systemd[1]: iscsid.socket: Deactivated successfully. Jan 13 21:14:04.335050 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 13 21:14:04.336649 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 13 21:14:04.336750 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 13 21:14:04.338394 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 13 21:14:04.338510 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 13 21:14:04.340349 systemd[1]: ignition-files.service: Deactivated successfully. Jan 13 21:14:04.340456 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 13 21:14:04.352997 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 13 21:14:04.353950 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 13 21:14:04.354093 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 13 21:14:04.360033 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 13 21:14:04.360927 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 13 21:14:04.361067 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 13 21:14:04.366975 ignition[992]: INFO : Ignition 2.19.0 Jan 13 21:14:04.366975 ignition[992]: INFO : Stage: umount Jan 13 21:14:04.366975 ignition[992]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 13 21:14:04.366975 ignition[992]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 13 21:14:04.366975 ignition[992]: INFO : umount: umount passed Jan 13 21:14:04.366975 ignition[992]: INFO : Ignition finished successfully Jan 13 21:14:04.363438 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 13 21:14:04.363546 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 13 21:14:04.377064 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 13 21:14:04.377158 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 13 21:14:04.379303 systemd[1]: Stopped target network.target - Network. Jan 13 21:14:04.381059 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 13 21:14:04.381127 systemd[1]: Stopped ignition-disks.service - Ignition (disks). 
Jan 13 21:14:04.383125 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 13 21:14:04.383172 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 13 21:14:04.384927 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 13 21:14:04.384969 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 13 21:14:04.386742 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 13 21:14:04.386789 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 13 21:14:04.392810 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 13 21:14:04.393890 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 13 21:14:04.395994 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 13 21:14:04.396125 systemd-networkd[761]: eth0: DHCPv6 lease lost Jan 13 21:14:04.396717 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 13 21:14:04.396814 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 13 21:14:04.400085 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 13 21:14:04.400175 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 13 21:14:04.402186 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 13 21:14:04.402286 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 13 21:14:04.406693 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 13 21:14:04.406733 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 13 21:14:04.418922 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 13 21:14:04.419825 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 13 21:14:04.419898 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 13 21:14:04.422035 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 13 21:14:04.422083 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 13 21:14:04.423869 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 13 21:14:04.423927 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 13 21:14:04.425867 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 13 21:14:04.425911 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 13 21:14:04.428127 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 13 21:14:04.438570 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 13 21:14:04.438682 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 13 21:14:04.443436 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 13 21:14:04.443560 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 13 21:14:04.445339 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 13 21:14:04.445417 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 13 21:14:04.447466 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 13 21:14:04.447522 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 13 21:14:04.448828 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 13 21:14:04.448862 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. 
Jan 13 21:14:04.450566 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 13 21:14:04.450611 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 13 21:14:04.453426 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 13 21:14:04.453469 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 13 21:14:04.456019 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 13 21:14:04.456064 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 13 21:14:04.459139 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 13 21:14:04.459187 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 13 21:14:04.474981 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 13 21:14:04.476065 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 13 21:14:04.476138 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 13 21:14:04.478388 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 13 21:14:04.478436 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 21:14:04.480833 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 13 21:14:04.480927 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 13 21:14:04.483079 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 13 21:14:04.485307 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 13 21:14:04.496007 systemd[1]: Switching root. Jan 13 21:14:04.530411 systemd-journald[237]: Journal stopped Jan 13 21:14:05.245652 systemd-journald[237]: Received SIGTERM from PID 1 (systemd). Jan 13 21:14:05.245712 kernel: SELinux: policy capability network_peer_controls=1 Jan 13 21:14:05.245724 kernel: SELinux: policy capability open_perms=1 Jan 13 21:14:05.245734 kernel: SELinux: policy capability extended_socket_class=1 Jan 13 21:14:05.245747 kernel: SELinux: policy capability always_check_network=0 Jan 13 21:14:05.245757 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 13 21:14:05.245767 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 13 21:14:05.245777 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 13 21:14:05.245786 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 13 21:14:05.245842 kernel: audit: type=1403 audit(1736802844.677:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 13 21:14:05.245856 systemd[1]: Successfully loaded SELinux policy in 32.712ms. Jan 13 21:14:05.245875 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.915ms. Jan 13 21:14:05.245886 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 13 21:14:05.245900 systemd[1]: Detected virtualization kvm. Jan 13 21:14:05.245911 systemd[1]: Detected architecture arm64. Jan 13 21:14:05.245921 systemd[1]: Detected first boot. Jan 13 21:14:05.245932 systemd[1]: Initializing machine ID from VM UUID. Jan 13 21:14:05.245943 zram_generator::config[1038]: No configuration found. Jan 13 21:14:05.245959 systemd[1]: Populated /etc with preset unit settings. 
Jan 13 21:14:05.245970 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 13 21:14:05.246051 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 13 21:14:05.246069 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 13 21:14:05.246095 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 13 21:14:05.246130 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 13 21:14:05.246143 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 13 21:14:05.246154 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 13 21:14:05.246166 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 13 21:14:05.246177 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 13 21:14:05.246188 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 13 21:14:05.246198 systemd[1]: Created slice user.slice - User and Session Slice. Jan 13 21:14:05.246211 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 13 21:14:05.246223 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 13 21:14:05.246234 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 13 21:14:05.246244 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 13 21:14:05.246255 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 13 21:14:05.246266 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 13 21:14:05.246277 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Jan 13 21:14:05.246288 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 13 21:14:05.246298 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 13 21:14:05.246310 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 13 21:14:05.246321 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 13 21:14:05.246331 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 13 21:14:05.246342 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 13 21:14:05.246352 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 13 21:14:05.246363 systemd[1]: Reached target slices.target - Slice Units. Jan 13 21:14:05.246374 systemd[1]: Reached target swap.target - Swaps. Jan 13 21:14:05.246384 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 13 21:14:05.246397 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 13 21:14:05.246407 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 13 21:14:05.246418 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 13 21:14:05.246428 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 13 21:14:05.246439 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 13 21:14:05.246449 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... 
Jan 13 21:14:05.246460 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 13 21:14:05.246470 systemd[1]: Mounting media.mount - External Media Directory... Jan 13 21:14:05.246481 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 13 21:14:05.246493 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 13 21:14:05.246504 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 13 21:14:05.246515 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 13 21:14:05.246527 systemd[1]: Reached target machines.target - Containers. Jan 13 21:14:05.246537 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 13 21:14:05.246548 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 21:14:05.246559 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 13 21:14:05.246569 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 13 21:14:05.246581 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 13 21:14:05.246592 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 13 21:14:05.246602 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 13 21:14:05.246613 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 13 21:14:05.246624 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 13 21:14:05.246642 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 13 21:14:05.246654 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 13 21:14:05.246665 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 13 21:14:05.246677 kernel: fuse: init (API version 7.39) Jan 13 21:14:05.246688 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 13 21:14:05.246698 systemd[1]: Stopped systemd-fsck-usr.service. Jan 13 21:14:05.246708 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 13 21:14:05.246718 kernel: ACPI: bus type drm_connector registered Jan 13 21:14:05.246728 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 13 21:14:05.246739 kernel: loop: module loaded Jan 13 21:14:05.246749 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 13 21:14:05.246759 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 13 21:14:05.246770 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 13 21:14:05.246782 systemd[1]: verity-setup.service: Deactivated successfully. Jan 13 21:14:05.246793 systemd[1]: Stopped verity-setup.service. Jan 13 21:14:05.246835 systemd-journald[1109]: Collecting audit messages is disabled. Jan 13 21:14:05.246862 systemd-journald[1109]: Journal started Jan 13 21:14:05.246883 systemd-journald[1109]: Runtime Journal (/run/log/journal/38df085c6949496aaa5322d717fb6e7d) is 5.9M, max 47.3M, 41.4M free. Jan 13 21:14:05.040935 systemd[1]: Queued start job for default target multi-user.target. 
Jan 13 21:14:05.061725 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 13 21:14:05.062093 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 13 21:14:05.247815 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 13 21:14:05.250824 systemd[1]: Started systemd-journald.service - Journal Service. Jan 13 21:14:05.251375 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 13 21:14:05.252655 systemd[1]: Mounted media.mount - External Media Directory. Jan 13 21:14:05.253829 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 13 21:14:05.255027 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 13 21:14:05.256242 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 13 21:14:05.258822 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 13 21:14:05.260207 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 13 21:14:05.263125 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 13 21:14:05.263273 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 13 21:14:05.264716 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 13 21:14:05.264993 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 13 21:14:05.266355 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 13 21:14:05.266500 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 13 21:14:05.267866 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 13 21:14:05.267999 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 13 21:14:05.269426 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 13 21:14:05.269558 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 13 21:14:05.270973 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 13 21:14:05.271102 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 13 21:14:05.272514 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 13 21:14:05.274119 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 13 21:14:05.275671 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 13 21:14:05.289252 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 13 21:14:05.296903 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 13 21:14:05.299004 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 13 21:14:05.300190 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 13 21:14:05.300233 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 13 21:14:05.302308 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 13 21:14:05.304618 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 13 21:14:05.306751 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 13 21:14:05.307914 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Jan 13 21:14:05.309428 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 13 21:14:05.311379 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 13 21:14:05.312589 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 13 21:14:05.316974 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 13 21:14:05.318424 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 13 21:14:05.319982 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 13 21:14:05.320766 systemd-journald[1109]: Time spent on flushing to /var/log/journal/38df085c6949496aaa5322d717fb6e7d is 19.698ms for 854 entries. Jan 13 21:14:05.320766 systemd-journald[1109]: System Journal (/var/log/journal/38df085c6949496aaa5322d717fb6e7d) is 8.0M, max 195.6M, 187.6M free. Jan 13 21:14:05.364235 systemd-journald[1109]: Received client request to flush runtime journal. Jan 13 21:14:05.364305 kernel: loop0: detected capacity change from 0 to 114328 Jan 13 21:14:05.325987 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 13 21:14:05.331788 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 13 21:14:05.334654 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 13 21:14:05.336225 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 13 21:14:05.339649 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 13 21:14:05.343256 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 13 21:14:05.345298 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 13 21:14:05.354537 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 13 21:14:05.365994 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 13 21:14:05.370988 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 13 21:14:05.375415 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 13 21:14:05.386791 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 13 21:14:05.384112 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 13 21:14:05.392333 udevadm[1162]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jan 13 21:14:05.401513 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 13 21:14:05.402959 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 13 21:14:05.409970 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 13 21:14:05.418861 kernel: loop1: detected capacity change from 0 to 114432 Jan 13 21:14:05.419945 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 13 21:14:05.444577 systemd-tmpfiles[1168]: ACLs are not supported, ignoring. Jan 13 21:14:05.444599 systemd-tmpfiles[1168]: ACLs are not supported, ignoring. 
Jan 13 21:14:05.448834 kernel: loop2: detected capacity change from 0 to 189592 Jan 13 21:14:05.450522 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 13 21:14:05.482823 kernel: loop3: detected capacity change from 0 to 114328 Jan 13 21:14:05.489862 kernel: loop4: detected capacity change from 0 to 114432 Jan 13 21:14:05.494817 kernel: loop5: detected capacity change from 0 to 189592 Jan 13 21:14:05.499641 (sd-merge)[1173]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jan 13 21:14:05.500135 (sd-merge)[1173]: Merged extensions into '/usr'. Jan 13 21:14:05.503650 systemd[1]: Reloading requested from client PID 1149 ('systemd-sysext') (unit systemd-sysext.service)... Jan 13 21:14:05.503664 systemd[1]: Reloading... Jan 13 21:14:05.570868 zram_generator::config[1199]: No configuration found. Jan 13 21:14:05.630825 ldconfig[1144]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 13 21:14:05.668659 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 21:14:05.704495 systemd[1]: Reloading finished in 200 ms. Jan 13 21:14:05.736022 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 13 21:14:05.737497 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 13 21:14:05.752971 systemd[1]: Starting ensure-sysext.service... Jan 13 21:14:05.754998 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 13 21:14:05.761116 systemd[1]: Reloading requested from client PID 1234 ('systemctl') (unit ensure-sysext.service)... Jan 13 21:14:05.761132 systemd[1]: Reloading... Jan 13 21:14:05.772033 systemd-tmpfiles[1235]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 13 21:14:05.772595 systemd-tmpfiles[1235]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 13 21:14:05.773457 systemd-tmpfiles[1235]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 13 21:14:05.773786 systemd-tmpfiles[1235]: ACLs are not supported, ignoring. Jan 13 21:14:05.773927 systemd-tmpfiles[1235]: ACLs are not supported, ignoring. Jan 13 21:14:05.776201 systemd-tmpfiles[1235]: Detected autofs mount point /boot during canonicalization of boot. Jan 13 21:14:05.776306 systemd-tmpfiles[1235]: Skipping /boot Jan 13 21:14:05.783273 systemd-tmpfiles[1235]: Detected autofs mount point /boot during canonicalization of boot. Jan 13 21:14:05.783365 systemd-tmpfiles[1235]: Skipping /boot Jan 13 21:14:05.807828 zram_generator::config[1266]: No configuration found. Jan 13 21:14:05.884564 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 21:14:05.919959 systemd[1]: Reloading finished in 158 ms. Jan 13 21:14:05.934272 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 13 21:14:05.952258 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 13 21:14:05.960167 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... 
Jan 13 21:14:05.962818 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 13 21:14:05.965337 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 13 21:14:05.969208 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 13 21:14:05.975097 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 13 21:14:05.980062 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 13 21:14:05.983736 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 21:14:05.986184 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 13 21:14:05.993586 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 13 21:14:05.997833 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 13 21:14:05.998919 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 21:14:06.000823 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 13 21:14:06.002476 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 13 21:14:06.002723 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 13 21:14:06.004610 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 13 21:14:06.004758 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 13 21:14:06.007342 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 13 21:14:06.008929 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 13 21:14:06.017836 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 13 21:14:06.021180 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 21:14:06.022953 systemd-udevd[1304]: Using default interface naming scheme 'v255'. Jan 13 21:14:06.029092 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 13 21:14:06.031524 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 13 21:14:06.034168 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 13 21:14:06.035581 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 21:14:06.039103 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 13 21:14:06.043990 augenrules[1330]: No rules Jan 13 21:14:06.044881 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 13 21:14:06.045930 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 13 21:14:06.047204 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 13 21:14:06.050886 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 13 21:14:06.051022 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 13 21:14:06.052630 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Jan 13 21:14:06.052756 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 13 21:14:06.054500 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 13 21:14:06.054648 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 13 21:14:06.059542 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 13 21:14:06.063546 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 13 21:14:06.066024 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 13 21:14:06.080979 systemd[1]: Finished ensure-sysext.service. Jan 13 21:14:06.084953 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Jan 13 21:14:06.086563 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 21:14:06.096788 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 13 21:14:06.099829 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1358) Jan 13 21:14:06.103792 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 13 21:14:06.106934 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 13 21:14:06.113518 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 13 21:14:06.114759 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 21:14:06.118003 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 13 21:14:06.122995 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 13 21:14:06.124235 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 13 21:14:06.124554 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 13 21:14:06.126848 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 13 21:14:06.128843 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 13 21:14:06.130316 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 13 21:14:06.130448 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 13 21:14:06.132012 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 13 21:14:06.133826 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 13 21:14:06.135281 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 13 21:14:06.135437 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 13 21:14:06.152245 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 13 21:14:06.152312 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 13 21:14:06.183623 systemd-resolved[1303]: Positive Trust Anchors: Jan 13 21:14:06.183643 systemd-resolved[1303]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 13 21:14:06.183676 systemd-resolved[1303]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 13 21:14:06.187048 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 13 21:14:06.198768 systemd-resolved[1303]: Defaulting to hostname 'linux'. Jan 13 21:14:06.200978 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 13 21:14:06.202309 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 13 21:14:06.204252 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 13 21:14:06.213158 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 21:14:06.215643 systemd-networkd[1376]: lo: Link UP Jan 13 21:14:06.215654 systemd-networkd[1376]: lo: Gained carrier Jan 13 21:14:06.216343 systemd-networkd[1376]: Enumeration completed Jan 13 21:14:06.216660 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 13 21:14:06.218690 systemd[1]: Reached target network.target - Network. Jan 13 21:14:06.220034 systemd-networkd[1376]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 21:14:06.220037 systemd-networkd[1376]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 13 21:14:06.225009 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 13 21:14:06.228818 systemd-networkd[1376]: eth0: Link UP Jan 13 21:14:06.228828 systemd-networkd[1376]: eth0: Gained carrier Jan 13 21:14:06.228844 systemd-networkd[1376]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 21:14:06.230516 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 13 21:14:06.232191 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 13 21:14:06.233785 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 13 21:14:06.235669 systemd[1]: Reached target time-set.target - System Time Set. Jan 13 21:14:06.238275 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 13 21:14:06.248891 systemd-networkd[1376]: eth0: DHCPv4 address 10.0.0.48/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 13 21:14:06.249840 systemd-timesyncd[1377]: Network configuration changed, trying to establish connection. Jan 13 21:14:06.670646 systemd-resolved[1303]: Clock change detected. Flushing caches. Jan 13 21:14:06.670802 systemd-timesyncd[1377]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jan 13 21:14:06.670974 systemd-timesyncd[1377]: Initial clock synchronization to Mon 2025-01-13 21:14:06.670572 UTC. Jan 13 21:14:06.672824 lvm[1396]: WARNING: Failed to connect to lvmetad. 
Falling back to device scanning. Jan 13 21:14:06.686582 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 21:14:06.700464 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 13 21:14:06.701859 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 13 21:14:06.703021 systemd[1]: Reached target sysinit.target - System Initialization. Jan 13 21:14:06.704212 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 13 21:14:06.705484 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 13 21:14:06.706881 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 13 21:14:06.708023 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 13 21:14:06.709351 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 13 21:14:06.710551 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 13 21:14:06.710584 systemd[1]: Reached target paths.target - Path Units. Jan 13 21:14:06.711474 systemd[1]: Reached target timers.target - Timer Units. Jan 13 21:14:06.713116 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 13 21:14:06.715409 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 13 21:14:06.723471 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 13 21:14:06.725898 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 13 21:14:06.727483 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 13 21:14:06.728737 systemd[1]: Reached target sockets.target - Socket Units. Jan 13 21:14:06.729636 systemd[1]: Reached target basic.target - Basic System. Jan 13 21:14:06.730606 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 13 21:14:06.730629 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 13 21:14:06.731501 systemd[1]: Starting containerd.service - containerd container runtime... Jan 13 21:14:06.733532 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 13 21:14:06.736854 lvm[1404]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 13 21:14:06.736936 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 13 21:14:06.741916 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 13 21:14:06.742950 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 13 21:14:06.746737 jq[1407]: false Jan 13 21:14:06.747144 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 13 21:14:06.750593 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 13 21:14:06.753926 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 13 21:14:06.757601 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 13 21:14:06.764246 systemd[1]: Starting systemd-logind.service - User Login Management... 
Jan 13 21:14:06.768139 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 13 21:14:06.768860 extend-filesystems[1408]: Found loop3 Jan 13 21:14:06.768860 extend-filesystems[1408]: Found loop4 Jan 13 21:14:06.768860 extend-filesystems[1408]: Found loop5 Jan 13 21:14:06.768860 extend-filesystems[1408]: Found vda Jan 13 21:14:06.768860 extend-filesystems[1408]: Found vda1 Jan 13 21:14:06.768860 extend-filesystems[1408]: Found vda2 Jan 13 21:14:06.768860 extend-filesystems[1408]: Found vda3 Jan 13 21:14:06.768860 extend-filesystems[1408]: Found usr Jan 13 21:14:06.768860 extend-filesystems[1408]: Found vda4 Jan 13 21:14:06.768860 extend-filesystems[1408]: Found vda6 Jan 13 21:14:06.768860 extend-filesystems[1408]: Found vda7 Jan 13 21:14:06.768860 extend-filesystems[1408]: Found vda9 Jan 13 21:14:06.768860 extend-filesystems[1408]: Checking size of /dev/vda9 Jan 13 21:14:06.768659 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 13 21:14:06.769936 systemd[1]: Starting update-engine.service - Update Engine... Jan 13 21:14:06.774537 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 13 21:14:06.779714 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 13 21:14:06.788248 jq[1424]: true Jan 13 21:14:06.783433 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 13 21:14:06.783601 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 13 21:14:06.783889 systemd[1]: motdgen.service: Deactivated successfully. Jan 13 21:14:06.784022 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 13 21:14:06.790161 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 13 21:14:06.790797 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 13 21:14:06.796846 extend-filesystems[1408]: Resized partition /dev/vda9 Jan 13 21:14:06.800968 extend-filesystems[1432]: resize2fs 1.47.1 (20-May-2024) Jan 13 21:14:06.808875 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jan 13 21:14:06.797466 dbus-daemon[1406]: [system] SELinux support is enabled Jan 13 21:14:06.797688 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 13 21:14:06.817724 jq[1431]: true Jan 13 21:14:06.817655 (ntainerd)[1434]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 13 21:14:06.821728 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1336) Jan 13 21:14:06.825961 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 13 21:14:06.825994 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 13 21:14:06.828829 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 13 21:14:06.828856 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
Jan 13 21:14:06.833715 tar[1428]: linux-arm64/helm Jan 13 21:14:06.842724 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jan 13 21:14:06.853277 update_engine[1423]: I20250113 21:14:06.852913 1423 main.cc:92] Flatcar Update Engine starting Jan 13 21:14:06.863818 update_engine[1423]: I20250113 21:14:06.859159 1423 update_check_scheduler.cc:74] Next update check in 7m47s Jan 13 21:14:06.859121 systemd[1]: Started update-engine.service - Update Engine. Jan 13 21:14:06.863400 systemd-logind[1418]: Watching system buttons on /dev/input/event0 (Power Button) Jan 13 21:14:06.864895 systemd-logind[1418]: New seat seat0. Jan 13 21:14:06.870972 extend-filesystems[1432]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 13 21:14:06.870972 extend-filesystems[1432]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 13 21:14:06.870972 extend-filesystems[1432]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jan 13 21:14:06.870879 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 13 21:14:06.877635 extend-filesystems[1408]: Resized filesystem in /dev/vda9 Jan 13 21:14:06.872632 systemd[1]: Started systemd-logind.service - User Login Management. Jan 13 21:14:06.875683 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 13 21:14:06.878740 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 13 21:14:06.909016 bash[1461]: Updated "/home/core/.ssh/authorized_keys" Jan 13 21:14:06.911743 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 13 21:14:06.913529 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 13 21:14:06.942837 locksmithd[1456]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 13 21:14:07.021147 containerd[1434]: time="2025-01-13T21:14:07.021060713Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 13 21:14:07.049445 containerd[1434]: time="2025-01-13T21:14:07.049360873Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 13 21:14:07.051130 containerd[1434]: time="2025-01-13T21:14:07.050900993Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 13 21:14:07.051130 containerd[1434]: time="2025-01-13T21:14:07.050984433Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 13 21:14:07.051130 containerd[1434]: time="2025-01-13T21:14:07.051002833Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 13 21:14:07.051457 containerd[1434]: time="2025-01-13T21:14:07.051378313Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 13 21:14:07.051613 containerd[1434]: time="2025-01-13T21:14:07.051406073Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 13 21:14:07.051932 containerd[1434]: time="2025-01-13T21:14:07.051718233Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." 
error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 21:14:07.051932 containerd[1434]: time="2025-01-13T21:14:07.051804793Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 13 21:14:07.052258 containerd[1434]: time="2025-01-13T21:14:07.052233353Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 21:14:07.052328 containerd[1434]: time="2025-01-13T21:14:07.052314713Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 13 21:14:07.052328 containerd[1434]: time="2025-01-13T21:14:07.052389633Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 21:14:07.052328 containerd[1434]: time="2025-01-13T21:14:07.052406953Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 13 21:14:07.052328 containerd[1434]: time="2025-01-13T21:14:07.052541033Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 13 21:14:07.052328 containerd[1434]: time="2025-01-13T21:14:07.052773953Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 13 21:14:07.052328 containerd[1434]: time="2025-01-13T21:14:07.052892473Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 21:14:07.052328 containerd[1434]: time="2025-01-13T21:14:07.052907073Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 13 21:14:07.052328 containerd[1434]: time="2025-01-13T21:14:07.052980793Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 13 21:14:07.052328 containerd[1434]: time="2025-01-13T21:14:07.053017233Z" level=info msg="metadata content store policy set" policy=shared Jan 13 21:14:07.057305 containerd[1434]: time="2025-01-13T21:14:07.057277473Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 13 21:14:07.057481 containerd[1434]: time="2025-01-13T21:14:07.057462553Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 13 21:14:07.057711 containerd[1434]: time="2025-01-13T21:14:07.057639673Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 13 21:14:07.057802 containerd[1434]: time="2025-01-13T21:14:07.057786193Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 13 21:14:07.057912 containerd[1434]: time="2025-01-13T21:14:07.057895273Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 13 21:14:07.058187 containerd[1434]: time="2025-01-13T21:14:07.058119273Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." 
type=io.containerd.monitor.v1 Jan 13 21:14:07.059307 containerd[1434]: time="2025-01-13T21:14:07.058544473Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 13 21:14:07.059307 containerd[1434]: time="2025-01-13T21:14:07.058661593Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 13 21:14:07.059307 containerd[1434]: time="2025-01-13T21:14:07.058678233Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 13 21:14:07.059307 containerd[1434]: time="2025-01-13T21:14:07.058690433Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 13 21:14:07.059307 containerd[1434]: time="2025-01-13T21:14:07.058717073Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 13 21:14:07.059307 containerd[1434]: time="2025-01-13T21:14:07.058730033Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 13 21:14:07.059307 containerd[1434]: time="2025-01-13T21:14:07.058741473Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 13 21:14:07.059307 containerd[1434]: time="2025-01-13T21:14:07.058754473Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 13 21:14:07.059307 containerd[1434]: time="2025-01-13T21:14:07.058812353Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 13 21:14:07.059307 containerd[1434]: time="2025-01-13T21:14:07.058826713Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 13 21:14:07.059307 containerd[1434]: time="2025-01-13T21:14:07.058838313Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 13 21:14:07.059307 containerd[1434]: time="2025-01-13T21:14:07.058849833Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 13 21:14:07.059307 containerd[1434]: time="2025-01-13T21:14:07.058868353Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 13 21:14:07.059307 containerd[1434]: time="2025-01-13T21:14:07.058883393Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 13 21:14:07.059585 containerd[1434]: time="2025-01-13T21:14:07.058901593Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 13 21:14:07.059585 containerd[1434]: time="2025-01-13T21:14:07.058916673Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 13 21:14:07.059585 containerd[1434]: time="2025-01-13T21:14:07.058927833Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 13 21:14:07.059585 containerd[1434]: time="2025-01-13T21:14:07.058939593Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 13 21:14:07.059585 containerd[1434]: time="2025-01-13T21:14:07.058950073Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." 
type=io.containerd.grpc.v1 Jan 13 21:14:07.059585 containerd[1434]: time="2025-01-13T21:14:07.058961993Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 13 21:14:07.059585 containerd[1434]: time="2025-01-13T21:14:07.058973713Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 13 21:14:07.059585 containerd[1434]: time="2025-01-13T21:14:07.058986673Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 13 21:14:07.059585 containerd[1434]: time="2025-01-13T21:14:07.058997953Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 13 21:14:07.059585 containerd[1434]: time="2025-01-13T21:14:07.059008993Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 13 21:14:07.059585 containerd[1434]: time="2025-01-13T21:14:07.059020833Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 13 21:14:07.059585 containerd[1434]: time="2025-01-13T21:14:07.059035633Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 13 21:14:07.059585 containerd[1434]: time="2025-01-13T21:14:07.059055473Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 13 21:14:07.059585 containerd[1434]: time="2025-01-13T21:14:07.059066953Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 13 21:14:07.059585 containerd[1434]: time="2025-01-13T21:14:07.059076873Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 13 21:14:07.059855 containerd[1434]: time="2025-01-13T21:14:07.059185913Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 13 21:14:07.059855 containerd[1434]: time="2025-01-13T21:14:07.059203913Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 13 21:14:07.059855 containerd[1434]: time="2025-01-13T21:14:07.059213793Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 13 21:14:07.059855 containerd[1434]: time="2025-01-13T21:14:07.059226473Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 13 21:14:07.059855 containerd[1434]: time="2025-01-13T21:14:07.059235273Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 13 21:14:07.060249 containerd[1434]: time="2025-01-13T21:14:07.059246313Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 13 21:14:07.060249 containerd[1434]: time="2025-01-13T21:14:07.060073553Z" level=info msg="NRI interface is disabled by configuration." Jan 13 21:14:07.060249 containerd[1434]: time="2025-01-13T21:14:07.060092713Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 13 21:14:07.060854 containerd[1434]: time="2025-01-13T21:14:07.060742353Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 13 21:14:07.061196 containerd[1434]: time="2025-01-13T21:14:07.061061153Z" level=info msg="Connect containerd service" Jan 13 21:14:07.061196 containerd[1434]: time="2025-01-13T21:14:07.061105233Z" level=info msg="using legacy CRI server" Jan 13 21:14:07.061196 containerd[1434]: time="2025-01-13T21:14:07.061112433Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 13 21:14:07.061850 containerd[1434]: time="2025-01-13T21:14:07.061820433Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 13 21:14:07.063692 containerd[1434]: time="2025-01-13T21:14:07.063654913Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 13 21:14:07.064565 
containerd[1434]: time="2025-01-13T21:14:07.064524073Z" level=info msg="Start subscribing containerd event" Jan 13 21:14:07.064870 containerd[1434]: time="2025-01-13T21:14:07.064848833Z" level=info msg="Start recovering state" Jan 13 21:14:07.065380 containerd[1434]: time="2025-01-13T21:14:07.065355073Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 13 21:14:07.065753 containerd[1434]: time="2025-01-13T21:14:07.065666513Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 13 21:14:07.065902 containerd[1434]: time="2025-01-13T21:14:07.065551753Z" level=info msg="Start event monitor" Jan 13 21:14:07.066271 containerd[1434]: time="2025-01-13T21:14:07.066251353Z" level=info msg="Start snapshots syncer" Jan 13 21:14:07.066455 containerd[1434]: time="2025-01-13T21:14:07.066339113Z" level=info msg="Start cni network conf syncer for default" Jan 13 21:14:07.066455 containerd[1434]: time="2025-01-13T21:14:07.066353033Z" level=info msg="Start streaming server" Jan 13 21:14:07.067727 containerd[1434]: time="2025-01-13T21:14:07.066846793Z" level=info msg="containerd successfully booted in 0.046595s" Jan 13 21:14:07.066929 systemd[1]: Started containerd.service - containerd container runtime. Jan 13 21:14:07.208296 tar[1428]: linux-arm64/LICENSE Jan 13 21:14:07.208387 tar[1428]: linux-arm64/README.md Jan 13 21:14:07.226736 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 13 21:14:08.396819 systemd-networkd[1376]: eth0: Gained IPv6LL Jan 13 21:14:08.403377 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 13 21:14:08.405178 systemd[1]: Reached target network-online.target - Network is Online. Jan 13 21:14:08.411358 sshd_keygen[1427]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 13 21:14:08.415503 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 13 21:14:08.418162 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:14:08.420271 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 13 21:14:08.433661 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 13 21:14:08.437844 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 13 21:14:08.439851 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 13 21:14:08.442266 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 13 21:14:08.442410 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 13 21:14:08.444670 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 13 21:14:08.446444 systemd[1]: issuegen.service: Deactivated successfully. Jan 13 21:14:08.446652 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 13 21:14:08.449342 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 13 21:14:08.461798 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 13 21:14:08.464544 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 13 21:14:08.466641 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jan 13 21:14:08.468046 systemd[1]: Reached target getty.target - Login Prompts. Jan 13 21:14:08.908089 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:14:08.909603 systemd[1]: Reached target multi-user.target - Multi-User System. 
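Editor's note: during the containerd start above, the CRI plugin logged that no CNI network config was found in /etc/cni/net.d, which is expected on a first boot before any network add-on has been installed. A minimal sketch of a conflist that would satisfy that check is shown below; the file name, network name, and subnet are illustrative assumptions, and in practice the CNI plugin deployed later writes this file itself.

```json
{
  "cniVersion": "0.4.0",
  "name": "podnet",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipMasq": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
```

The dumped CRI config above also shows SystemdCgroup:true for the runc runtime, which lines up with the systemd cgroup driver the kubelet reports later in this log.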
Jan 13 21:14:08.911544 (kubelet)[1520]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 21:14:08.915183 systemd[1]: Startup finished in 577ms (kernel) + 4.947s (initrd) + 3.856s (userspace) = 9.381s. Jan 13 21:14:09.334601 kubelet[1520]: E0113 21:14:09.334492 1520 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 21:14:09.337122 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 21:14:09.337272 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 21:14:12.891286 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 13 21:14:12.892395 systemd[1]: Started sshd@0-10.0.0.48:22-10.0.0.1:39236.service - OpenSSH per-connection server daemon (10.0.0.1:39236). Jan 13 21:14:12.942745 sshd[1533]: Accepted publickey for core from 10.0.0.1 port 39236 ssh2: RSA SHA256:yd4gyStb+mhc+KSvOhXa4vXVFZWeTXZvH887VPDApJg Jan 13 21:14:12.944146 sshd[1533]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:14:12.953818 systemd-logind[1418]: New session 1 of user core. Jan 13 21:14:12.954655 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 13 21:14:12.964003 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 13 21:14:12.972118 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 13 21:14:12.974129 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 13 21:14:12.979858 (systemd)[1537]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 13 21:14:13.064180 systemd[1537]: Queued start job for default target default.target. Jan 13 21:14:13.072653 systemd[1537]: Created slice app.slice - User Application Slice. Jan 13 21:14:13.072687 systemd[1537]: Reached target paths.target - Paths. Jan 13 21:14:13.072724 systemd[1537]: Reached target timers.target - Timers. Jan 13 21:14:13.073834 systemd[1537]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 13 21:14:13.082447 systemd[1537]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 13 21:14:13.082505 systemd[1537]: Reached target sockets.target - Sockets. Jan 13 21:14:13.082517 systemd[1537]: Reached target basic.target - Basic System. Jan 13 21:14:13.082551 systemd[1537]: Reached target default.target - Main User Target. Jan 13 21:14:13.082576 systemd[1537]: Startup finished in 98ms. Jan 13 21:14:13.082803 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 13 21:14:13.092862 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 13 21:14:13.150661 systemd[1]: Started sshd@1-10.0.0.48:22-10.0.0.1:39238.service - OpenSSH per-connection server daemon (10.0.0.1:39238). Jan 13 21:14:13.187643 sshd[1548]: Accepted publickey for core from 10.0.0.1 port 39238 ssh2: RSA SHA256:yd4gyStb+mhc+KSvOhXa4vXVFZWeTXZvH887VPDApJg Jan 13 21:14:13.188959 sshd[1548]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:14:13.194008 systemd-logind[1418]: New session 2 of user core. Jan 13 21:14:13.202884 systemd[1]: Started session-2.scope - Session 2 of User core. 
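Editor's note: the kubelet exit above is the expected crash-loop on a node where kubeadm has not yet run; the unit is enabled at boot, but /var/lib/kubelet/config.yaml only appears once kubeadm init or kubeadm join writes it. A sketch of the kind of KubeletConfiguration that ends up in that file follows; the field values are illustrative assumptions, not taken from this host.

```yaml
# /var/lib/kubelet/config.yaml is normally generated by kubeadm, not written by hand
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd              # matches the SystemdCgroup=true runc option in the CRI config above
staticPodPath: /etc/kubernetes/manifests
clusterDomain: cluster.local
clusterDNS:
  - 10.96.0.10                     # illustrative cluster DNS service address
authentication:
  x509:
    clientCAFile: /etc/kubernetes/pki/ca.crt
```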
Jan 13 21:14:13.254689 sshd[1548]: pam_unix(sshd:session): session closed for user core Jan 13 21:14:13.268071 systemd[1]: sshd@1-10.0.0.48:22-10.0.0.1:39238.service: Deactivated successfully. Jan 13 21:14:13.269507 systemd[1]: session-2.scope: Deactivated successfully. Jan 13 21:14:13.270713 systemd-logind[1418]: Session 2 logged out. Waiting for processes to exit. Jan 13 21:14:13.281938 systemd[1]: Started sshd@2-10.0.0.48:22-10.0.0.1:39252.service - OpenSSH per-connection server daemon (10.0.0.1:39252). Jan 13 21:14:13.282728 systemd-logind[1418]: Removed session 2. Jan 13 21:14:13.317744 sshd[1555]: Accepted publickey for core from 10.0.0.1 port 39252 ssh2: RSA SHA256:yd4gyStb+mhc+KSvOhXa4vXVFZWeTXZvH887VPDApJg Jan 13 21:14:13.318525 sshd[1555]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:14:13.322871 systemd-logind[1418]: New session 3 of user core. Jan 13 21:14:13.340894 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 13 21:14:13.389610 sshd[1555]: pam_unix(sshd:session): session closed for user core Jan 13 21:14:13.399268 systemd[1]: sshd@2-10.0.0.48:22-10.0.0.1:39252.service: Deactivated successfully. Jan 13 21:14:13.402165 systemd[1]: session-3.scope: Deactivated successfully. Jan 13 21:14:13.403429 systemd-logind[1418]: Session 3 logged out. Waiting for processes to exit. Jan 13 21:14:13.404856 systemd[1]: Started sshd@3-10.0.0.48:22-10.0.0.1:39258.service - OpenSSH per-connection server daemon (10.0.0.1:39258). Jan 13 21:14:13.405639 systemd-logind[1418]: Removed session 3. Jan 13 21:14:13.444882 sshd[1562]: Accepted publickey for core from 10.0.0.1 port 39258 ssh2: RSA SHA256:yd4gyStb+mhc+KSvOhXa4vXVFZWeTXZvH887VPDApJg Jan 13 21:14:13.446190 sshd[1562]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:14:13.450437 systemd-logind[1418]: New session 4 of user core. Jan 13 21:14:13.464860 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 13 21:14:13.517104 sshd[1562]: pam_unix(sshd:session): session closed for user core Jan 13 21:14:13.525937 systemd[1]: sshd@3-10.0.0.48:22-10.0.0.1:39258.service: Deactivated successfully. Jan 13 21:14:13.527275 systemd[1]: session-4.scope: Deactivated successfully. Jan 13 21:14:13.529805 systemd-logind[1418]: Session 4 logged out. Waiting for processes to exit. Jan 13 21:14:13.530868 systemd[1]: Started sshd@4-10.0.0.48:22-10.0.0.1:39270.service - OpenSSH per-connection server daemon (10.0.0.1:39270). Jan 13 21:14:13.531585 systemd-logind[1418]: Removed session 4. Jan 13 21:14:13.567776 sshd[1569]: Accepted publickey for core from 10.0.0.1 port 39270 ssh2: RSA SHA256:yd4gyStb+mhc+KSvOhXa4vXVFZWeTXZvH887VPDApJg Jan 13 21:14:13.568978 sshd[1569]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:14:13.572743 systemd-logind[1418]: New session 5 of user core. Jan 13 21:14:13.584835 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 13 21:14:13.644323 sudo[1572]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 13 21:14:13.644600 sudo[1572]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 21:14:13.658460 sudo[1572]: pam_unix(sudo:session): session closed for user root Jan 13 21:14:13.660095 sshd[1569]: pam_unix(sshd:session): session closed for user core Jan 13 21:14:13.669124 systemd[1]: sshd@4-10.0.0.48:22-10.0.0.1:39270.service: Deactivated successfully. 
Jan 13 21:14:13.670471 systemd[1]: session-5.scope: Deactivated successfully. Jan 13 21:14:13.671809 systemd-logind[1418]: Session 5 logged out. Waiting for processes to exit. Jan 13 21:14:13.674942 systemd[1]: Started sshd@5-10.0.0.48:22-10.0.0.1:39276.service - OpenSSH per-connection server daemon (10.0.0.1:39276). Jan 13 21:14:13.675661 systemd-logind[1418]: Removed session 5. Jan 13 21:14:13.710788 sshd[1577]: Accepted publickey for core from 10.0.0.1 port 39276 ssh2: RSA SHA256:yd4gyStb+mhc+KSvOhXa4vXVFZWeTXZvH887VPDApJg Jan 13 21:14:13.712111 sshd[1577]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:14:13.716754 systemd-logind[1418]: New session 6 of user core. Jan 13 21:14:13.722854 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 13 21:14:13.773104 sudo[1581]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 13 21:14:13.773389 sudo[1581]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 21:14:13.776416 sudo[1581]: pam_unix(sudo:session): session closed for user root Jan 13 21:14:13.780993 sudo[1580]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 13 21:14:13.781244 sudo[1580]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 21:14:13.797000 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 13 21:14:13.798120 auditctl[1584]: No rules Jan 13 21:14:13.798916 systemd[1]: audit-rules.service: Deactivated successfully. Jan 13 21:14:13.799118 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 13 21:14:13.800861 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 13 21:14:13.823724 augenrules[1602]: No rules Jan 13 21:14:13.825793 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 13 21:14:13.827180 sudo[1580]: pam_unix(sudo:session): session closed for user root Jan 13 21:14:13.828786 sshd[1577]: pam_unix(sshd:session): session closed for user core Jan 13 21:14:13.844131 systemd[1]: sshd@5-10.0.0.48:22-10.0.0.1:39276.service: Deactivated successfully. Jan 13 21:14:13.845593 systemd[1]: session-6.scope: Deactivated successfully. Jan 13 21:14:13.848006 systemd-logind[1418]: Session 6 logged out. Waiting for processes to exit. Jan 13 21:14:13.849160 systemd[1]: Started sshd@6-10.0.0.48:22-10.0.0.1:39282.service - OpenSSH per-connection server daemon (10.0.0.1:39282). Jan 13 21:14:13.850056 systemd-logind[1418]: Removed session 6. Jan 13 21:14:13.886990 sshd[1610]: Accepted publickey for core from 10.0.0.1 port 39282 ssh2: RSA SHA256:yd4gyStb+mhc+KSvOhXa4vXVFZWeTXZvH887VPDApJg Jan 13 21:14:13.888260 sshd[1610]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:14:13.892153 systemd-logind[1418]: New session 7 of user core. Jan 13 21:14:13.898903 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 13 21:14:13.949798 sudo[1613]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 13 21:14:13.950084 sudo[1613]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 21:14:14.284005 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Jan 13 21:14:14.284064 (dockerd)[1632]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 13 21:14:14.549808 dockerd[1632]: time="2025-01-13T21:14:14.549459673Z" level=info msg="Starting up" Jan 13 21:14:14.694478 dockerd[1632]: time="2025-01-13T21:14:14.694425033Z" level=info msg="Loading containers: start." Jan 13 21:14:14.776835 kernel: Initializing XFRM netlink socket Jan 13 21:14:14.840683 systemd-networkd[1376]: docker0: Link UP Jan 13 21:14:14.859918 dockerd[1632]: time="2025-01-13T21:14:14.859862673Z" level=info msg="Loading containers: done." Jan 13 21:14:14.870376 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2950461677-merged.mount: Deactivated successfully. Jan 13 21:14:14.873234 dockerd[1632]: time="2025-01-13T21:14:14.873182473Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 13 21:14:14.873316 dockerd[1632]: time="2025-01-13T21:14:14.873283033Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 13 21:14:14.873431 dockerd[1632]: time="2025-01-13T21:14:14.873402753Z" level=info msg="Daemon has completed initialization" Jan 13 21:14:14.906991 dockerd[1632]: time="2025-01-13T21:14:14.906847513Z" level=info msg="API listen on /run/docker.sock" Jan 13 21:14:14.907091 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 13 21:14:15.540259 containerd[1434]: time="2025-01-13T21:14:15.540206433Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.4\"" Jan 13 21:14:16.365604 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1220576344.mount: Deactivated successfully. 
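Editor's note: dockerd came up on the overlay2 storage driver and warned that native diff is disabled because the kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled; that only affects image-build performance, not running containers. A quick way to confirm the active driver and daemon version on a host like this is sketched below; exact output will differ.

```bash
docker info --format '{{.Driver}} {{.ServerVersion}}'   # expect: overlay2 26.1.0
docker info --format '{{json .DriverStatus}}'           # includes the "Native Overlay Diff" status row
```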
Jan 13 21:14:18.176088 containerd[1434]: time="2025-01-13T21:14:18.175979633Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:14:18.177486 containerd[1434]: time="2025-01-13T21:14:18.177448273Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.4: active requests=0, bytes read=25615587" Jan 13 21:14:18.179343 containerd[1434]: time="2025-01-13T21:14:18.179304313Z" level=info msg="ImageCreate event name:\"sha256:3e1123d6ebadbafa6eb77a9047f23f20befbbe2f177eb473a81b27a5de8c2ec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:14:18.182111 containerd[1434]: time="2025-01-13T21:14:18.182043233Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:ace6a943b058439bd6daeb74f152e7c36e6fc0b5e481cdff9364cd6ca0473e5e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:14:18.183383 containerd[1434]: time="2025-01-13T21:14:18.183354313Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.4\" with image id \"sha256:3e1123d6ebadbafa6eb77a9047f23f20befbbe2f177eb473a81b27a5de8c2ec5\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:ace6a943b058439bd6daeb74f152e7c36e6fc0b5e481cdff9364cd6ca0473e5e\", size \"25612385\" in 2.64309956s" Jan 13 21:14:18.183657 containerd[1434]: time="2025-01-13T21:14:18.183458553Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.4\" returns image reference \"sha256:3e1123d6ebadbafa6eb77a9047f23f20befbbe2f177eb473a81b27a5de8c2ec5\"" Jan 13 21:14:18.184155 containerd[1434]: time="2025-01-13T21:14:18.184125953Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.4\"" Jan 13 21:14:19.587617 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 13 21:14:19.598879 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:14:19.697284 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:14:19.700975 (kubelet)[1841]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 21:14:19.746473 kubelet[1841]: E0113 21:14:19.746418 1841 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 21:14:19.749314 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 21:14:19.749457 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jan 13 21:14:20.644783 containerd[1434]: time="2025-01-13T21:14:20.644555353Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:14:20.645599 containerd[1434]: time="2025-01-13T21:14:20.645571513Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.4: active requests=0, bytes read=22470098" Jan 13 21:14:20.646307 containerd[1434]: time="2025-01-13T21:14:20.646278393Z" level=info msg="ImageCreate event name:\"sha256:d5369864a42bf2c01d3ad462832526b7d3e40620c0e75fecefbffc203562ad55\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:14:20.650811 containerd[1434]: time="2025-01-13T21:14:20.650767833Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:4bd1d4a449e7a1a4f375bd7c71abf48a95f8949b38f725ded255077329f21f7b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:14:20.651510 containerd[1434]: time="2025-01-13T21:14:20.651471593Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.4\" with image id \"sha256:d5369864a42bf2c01d3ad462832526b7d3e40620c0e75fecefbffc203562ad55\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:4bd1d4a449e7a1a4f375bd7c71abf48a95f8949b38f725ded255077329f21f7b\", size \"23872417\" in 2.4673112s" Jan 13 21:14:20.651510 containerd[1434]: time="2025-01-13T21:14:20.651506633Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.4\" returns image reference \"sha256:d5369864a42bf2c01d3ad462832526b7d3e40620c0e75fecefbffc203562ad55\"" Jan 13 21:14:20.652712 containerd[1434]: time="2025-01-13T21:14:20.652676793Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.4\"" Jan 13 21:14:22.663246 containerd[1434]: time="2025-01-13T21:14:22.663200793Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:14:22.664116 containerd[1434]: time="2025-01-13T21:14:22.663849073Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.4: active requests=0, bytes read=17024204" Jan 13 21:14:22.664790 containerd[1434]: time="2025-01-13T21:14:22.664762473Z" level=info msg="ImageCreate event name:\"sha256:d99fc9a32f6b42ab5537eec09d599efae0f61c109406dae1ba255cec288fcb95\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:14:22.668117 containerd[1434]: time="2025-01-13T21:14:22.668079873Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:1a3081cb7d21763d22eb2c0781cc462d89f501ed523ad558dea1226f128fbfdd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:14:22.669647 containerd[1434]: time="2025-01-13T21:14:22.669252913Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.4\" with image id \"sha256:d99fc9a32f6b42ab5537eec09d599efae0f61c109406dae1ba255cec288fcb95\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:1a3081cb7d21763d22eb2c0781cc462d89f501ed523ad558dea1226f128fbfdd\", size \"18426541\" in 2.01640368s" Jan 13 21:14:22.669647 containerd[1434]: time="2025-01-13T21:14:22.669288593Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.4\" returns image reference \"sha256:d99fc9a32f6b42ab5537eec09d599efae0f61c109406dae1ba255cec288fcb95\"" Jan 13 21:14:22.669763 
containerd[1434]: time="2025-01-13T21:14:22.669675993Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.4\"" Jan 13 21:14:23.812917 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1172678446.mount: Deactivated successfully. Jan 13 21:14:24.575308 containerd[1434]: time="2025-01-13T21:14:24.575259073Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:14:24.576124 containerd[1434]: time="2025-01-13T21:14:24.576081033Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.4: active requests=0, bytes read=26771428" Jan 13 21:14:24.577020 containerd[1434]: time="2025-01-13T21:14:24.576787233Z" level=info msg="ImageCreate event name:\"sha256:34e142197cb996099cc1e98902c112642b3fb3dc559140c0a95279aa8d254d3a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:14:24.579211 containerd[1434]: time="2025-01-13T21:14:24.579181393Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:1739b3febca392035bf6edfe31efdfa55226be7b57389b2001ae357f7dcb99cf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:14:24.579781 containerd[1434]: time="2025-01-13T21:14:24.579750633Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.4\" with image id \"sha256:34e142197cb996099cc1e98902c112642b3fb3dc559140c0a95279aa8d254d3a\", repo tag \"registry.k8s.io/kube-proxy:v1.31.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:1739b3febca392035bf6edfe31efdfa55226be7b57389b2001ae357f7dcb99cf\", size \"26770445\" in 1.91004748s" Jan 13 21:14:24.579840 containerd[1434]: time="2025-01-13T21:14:24.579784313Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.4\" returns image reference \"sha256:34e142197cb996099cc1e98902c112642b3fb3dc559140c0a95279aa8d254d3a\"" Jan 13 21:14:24.580275 containerd[1434]: time="2025-01-13T21:14:24.580198513Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 13 21:14:25.192140 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3580104260.mount: Deactivated successfully. 
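Editor's note: the pulls above cover the core control-plane images for a v1.31 cluster (kube-apiserver, kube-controller-manager, kube-scheduler, kube-proxy), with coredns under way, all fetched through containerd's CRI. A sketch of how the same set can be listed and pre-pulled by hand, assuming kubeadm and crictl are installed on the node:

```bash
# List the images kubeadm would need for this Kubernetes version
kubeadm config images list --kubernetes-version v1.31.4

# Pre-pull one of them through the CRI endpoint containerd exposes
crictl --runtime-endpoint unix:///run/containerd/containerd.sock \
  pull registry.k8s.io/kube-apiserver:v1.31.4
```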
Jan 13 21:14:26.425744 containerd[1434]: time="2025-01-13T21:14:26.425615873Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:14:26.426238 containerd[1434]: time="2025-01-13T21:14:26.426195673Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485383" Jan 13 21:14:26.427289 containerd[1434]: time="2025-01-13T21:14:26.427258833Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:14:26.430764 containerd[1434]: time="2025-01-13T21:14:26.430705953Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:14:26.435057 containerd[1434]: time="2025-01-13T21:14:26.432433833Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.85218584s" Jan 13 21:14:26.435057 containerd[1434]: time="2025-01-13T21:14:26.434839713Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Jan 13 21:14:26.435967 containerd[1434]: time="2025-01-13T21:14:26.435939473Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 13 21:14:27.160246 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4039570715.mount: Deactivated successfully. 
Jan 13 21:14:27.169746 containerd[1434]: time="2025-01-13T21:14:27.169244233Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:14:27.171154 containerd[1434]: time="2025-01-13T21:14:27.171111593Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" Jan 13 21:14:27.172410 containerd[1434]: time="2025-01-13T21:14:27.172368313Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:14:27.175808 containerd[1434]: time="2025-01-13T21:14:27.175757713Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:14:27.176840 containerd[1434]: time="2025-01-13T21:14:27.176572313Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 740.60164ms" Jan 13 21:14:27.176840 containerd[1434]: time="2025-01-13T21:14:27.176608713Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Jan 13 21:14:27.177148 containerd[1434]: time="2025-01-13T21:14:27.177025033Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Jan 13 21:14:27.780121 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2470351617.mount: Deactivated successfully. Jan 13 21:14:29.999776 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 13 21:14:30.009852 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:14:30.104762 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:14:30.108052 (kubelet)[1969]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 21:14:30.141262 kubelet[1969]: E0113 21:14:30.141179 1969 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 21:14:30.143253 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 21:14:30.143368 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jan 13 21:14:31.239319 containerd[1434]: time="2025-01-13T21:14:31.239256193Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:14:31.240363 containerd[1434]: time="2025-01-13T21:14:31.240324033Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66406427" Jan 13 21:14:31.241067 containerd[1434]: time="2025-01-13T21:14:31.241022993Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:14:31.244391 containerd[1434]: time="2025-01-13T21:14:31.244331593Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:14:31.246898 containerd[1434]: time="2025-01-13T21:14:31.246861433Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 4.06980348s" Jan 13 21:14:31.247220 containerd[1434]: time="2025-01-13T21:14:31.247005593Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\"" Jan 13 21:14:37.566675 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:14:37.580931 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:14:37.606384 systemd[1]: Reloading requested from client PID 2013 ('systemctl') (unit session-7.scope)... Jan 13 21:14:37.606398 systemd[1]: Reloading... Jan 13 21:14:37.682743 zram_generator::config[2055]: No configuration found. Jan 13 21:14:37.783528 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 21:14:37.834675 systemd[1]: Reloading finished in 227 ms. Jan 13 21:14:37.880804 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 13 21:14:37.880878 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 13 21:14:37.881821 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:14:37.883756 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:14:37.977194 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:14:37.981625 (kubelet)[2098]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 13 21:14:38.023122 kubelet[2098]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 21:14:38.023122 kubelet[2098]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
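Editor's note: the deprecation warnings above (and the --volume-plugin-dir one that follows) refer to flags that kubeadm passes to the kubelet through the KUBELET_KUBEADM_ARGS environment variable rather than through the config file; the unit also notes KUBELET_EXTRA_ARGS being referenced but unset, which is harmless since it just expands to nothing. The usual shape of the kubeadm-written flags file is sketched below; the path and values are assumptions, not read from this host.

```bash
# /var/lib/kubelet/kubeadm-flags.env, sourced by the kubelet systemd drop-in
KUBELET_KUBEADM_ARGS="--container-runtime-endpoint=unix:///run/containerd/containerd.sock --pod-infra-container-image=registry.k8s.io/pause:3.10"
```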
Jan 13 21:14:38.023122 kubelet[2098]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 21:14:38.023422 kubelet[2098]: I0113 21:14:38.023186 2098 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 13 21:14:39.033732 kubelet[2098]: I0113 21:14:39.032550 2098 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Jan 13 21:14:39.033732 kubelet[2098]: I0113 21:14:39.032580 2098 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 13 21:14:39.033732 kubelet[2098]: I0113 21:14:39.032827 2098 server.go:929] "Client rotation is on, will bootstrap in background" Jan 13 21:14:39.070666 kubelet[2098]: E0113 21:14:39.070629 2098 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.48:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.48:6443: connect: connection refused" logger="UnhandledError" Jan 13 21:14:39.074075 kubelet[2098]: I0113 21:14:39.074043 2098 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 13 21:14:39.083821 kubelet[2098]: E0113 21:14:39.083778 2098 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 13 21:14:39.083821 kubelet[2098]: I0113 21:14:39.083818 2098 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 13 21:14:39.089156 kubelet[2098]: I0113 21:14:39.089135 2098 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 13 21:14:39.089460 kubelet[2098]: I0113 21:14:39.089447 2098 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 13 21:14:39.089580 kubelet[2098]: I0113 21:14:39.089556 2098 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 13 21:14:39.089763 kubelet[2098]: I0113 21:14:39.089582 2098 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 13 21:14:39.089914 kubelet[2098]: I0113 21:14:39.089901 2098 topology_manager.go:138] "Creating topology manager with none policy" Jan 13 21:14:39.089914 kubelet[2098]: I0113 21:14:39.089914 2098 container_manager_linux.go:300] "Creating device plugin manager" Jan 13 21:14:39.090093 kubelet[2098]: I0113 21:14:39.090079 2098 state_mem.go:36] "Initialized new in-memory state store" Jan 13 21:14:39.091773 kubelet[2098]: I0113 21:14:39.091742 2098 kubelet.go:408] "Attempting to sync node with API server" Jan 13 21:14:39.091773 kubelet[2098]: I0113 21:14:39.091772 2098 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 13 21:14:39.091887 kubelet[2098]: I0113 21:14:39.091865 2098 kubelet.go:314] "Adding apiserver pod source" Jan 13 21:14:39.091887 kubelet[2098]: I0113 21:14:39.091880 2098 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 13 21:14:39.094493 kubelet[2098]: I0113 21:14:39.093777 2098 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 13 21:14:39.095658 kubelet[2098]: I0113 21:14:39.095591 2098 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 13 21:14:39.096400 kubelet[2098]: W0113 21:14:39.096294 2098 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.48:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 
10.0.0.48:6443: connect: connection refused Jan 13 21:14:39.096400 kubelet[2098]: E0113 21:14:39.096355 2098 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.48:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.48:6443: connect: connection refused" logger="UnhandledError" Jan 13 21:14:39.096542 kubelet[2098]: W0113 21:14:39.096356 2098 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.48:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.48:6443: connect: connection refused Jan 13 21:14:39.096542 kubelet[2098]: E0113 21:14:39.096521 2098 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.48:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.48:6443: connect: connection refused" logger="UnhandledError" Jan 13 21:14:39.096920 kubelet[2098]: W0113 21:14:39.096896 2098 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 13 21:14:39.097589 kubelet[2098]: I0113 21:14:39.097553 2098 server.go:1269] "Started kubelet" Jan 13 21:14:39.098911 kubelet[2098]: I0113 21:14:39.098365 2098 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 13 21:14:39.098911 kubelet[2098]: I0113 21:14:39.098576 2098 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 13 21:14:39.098911 kubelet[2098]: I0113 21:14:39.098876 2098 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 13 21:14:39.099737 kubelet[2098]: I0113 21:14:39.099716 2098 server.go:460] "Adding debug handlers to kubelet server" Jan 13 21:14:39.100105 kubelet[2098]: I0113 21:14:39.100083 2098 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 13 21:14:39.100790 kubelet[2098]: I0113 21:14:39.100491 2098 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 13 21:14:39.101877 kubelet[2098]: E0113 21:14:39.101798 2098 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 13 21:14:39.101946 kubelet[2098]: I0113 21:14:39.101916 2098 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 13 21:14:39.102098 kubelet[2098]: I0113 21:14:39.102075 2098 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 13 21:14:39.102154 kubelet[2098]: I0113 21:14:39.102142 2098 reconciler.go:26] "Reconciler: start to sync state" Jan 13 21:14:39.102440 kubelet[2098]: W0113 21:14:39.102401 2098 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.48:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.48:6443: connect: connection refused Jan 13 21:14:39.102475 kubelet[2098]: E0113 21:14:39.102448 2098 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.48:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial 
tcp 10.0.0.48:6443: connect: connection refused" logger="UnhandledError" Jan 13 21:14:39.102690 kubelet[2098]: I0113 21:14:39.102651 2098 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 13 21:14:39.102903 kubelet[2098]: E0113 21:14:39.102842 2098 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 13 21:14:39.104772 kubelet[2098]: E0113 21:14:39.102652 2098 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.48:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.48:6443: connect: connection refused" interval="200ms" Jan 13 21:14:39.104772 kubelet[2098]: I0113 21:14:39.104022 2098 factory.go:221] Registration of the containerd container factory successfully Jan 13 21:14:39.104772 kubelet[2098]: I0113 21:14:39.104036 2098 factory.go:221] Registration of the systemd container factory successfully Jan 13 21:14:39.105970 kubelet[2098]: E0113 21:14:39.105044 2098 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.48:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.48:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.181a5d043d7be799 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-01-13 21:14:39.097530265 +0000 UTC m=+1.112568033,LastTimestamp:2025-01-13 21:14:39.097530265 +0000 UTC m=+1.112568033,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 13 21:14:39.114381 kubelet[2098]: I0113 21:14:39.114323 2098 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 13 21:14:39.115242 kubelet[2098]: I0113 21:14:39.115217 2098 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 13 21:14:39.115242 kubelet[2098]: I0113 21:14:39.115236 2098 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 13 21:14:39.115291 kubelet[2098]: I0113 21:14:39.115257 2098 kubelet.go:2321] "Starting kubelet main sync loop" Jan 13 21:14:39.115310 kubelet[2098]: E0113 21:14:39.115291 2098 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 13 21:14:39.116720 kubelet[2098]: W0113 21:14:39.116663 2098 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.48:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.48:6443: connect: connection refused Jan 13 21:14:39.116786 kubelet[2098]: E0113 21:14:39.116728 2098 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.48:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.48:6443: connect: connection refused" logger="UnhandledError" Jan 13 21:14:39.118710 kubelet[2098]: I0113 21:14:39.118202 2098 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 13 21:14:39.118710 kubelet[2098]: I0113 21:14:39.118220 2098 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 13 21:14:39.118710 kubelet[2098]: I0113 21:14:39.118235 2098 state_mem.go:36] "Initialized new in-memory state store" Jan 13 21:14:39.182020 kubelet[2098]: I0113 21:14:39.181973 2098 policy_none.go:49] "None policy: Start" Jan 13 21:14:39.182756 kubelet[2098]: I0113 21:14:39.182692 2098 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 13 21:14:39.182822 kubelet[2098]: I0113 21:14:39.182767 2098 state_mem.go:35] "Initializing new in-memory state store" Jan 13 21:14:39.188293 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 13 21:14:39.201909 kubelet[2098]: E0113 21:14:39.201864 2098 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 13 21:14:39.205012 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 13 21:14:39.207345 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Jan 13 21:14:39.216272 kubelet[2098]: E0113 21:14:39.216232 2098 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 13 21:14:39.217549 kubelet[2098]: I0113 21:14:39.217531 2098 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 13 21:14:39.218433 kubelet[2098]: I0113 21:14:39.217764 2098 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 13 21:14:39.218433 kubelet[2098]: I0113 21:14:39.217778 2098 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 13 21:14:39.218433 kubelet[2098]: I0113 21:14:39.218173 2098 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 13 21:14:39.219391 kubelet[2098]: E0113 21:14:39.219362 2098 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 13 21:14:39.304845 kubelet[2098]: E0113 21:14:39.304750 2098 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.48:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.48:6443: connect: connection refused" interval="400ms" Jan 13 21:14:39.319717 kubelet[2098]: I0113 21:14:39.319623 2098 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jan 13 21:14:39.319983 kubelet[2098]: E0113 21:14:39.319962 2098 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.48:6443/api/v1/nodes\": dial tcp 10.0.0.48:6443: connect: connection refused" node="localhost" Jan 13 21:14:39.422573 systemd[1]: Created slice kubepods-burstable-pod50a9ae38ddb3bec3278d8dc73a6a7009.slice - libcontainer container kubepods-burstable-pod50a9ae38ddb3bec3278d8dc73a6a7009.slice. Jan 13 21:14:39.444299 systemd[1]: Created slice kubepods-burstable-poda52b86ce975f496e6002ba953fa9b888.slice - libcontainer container kubepods-burstable-poda52b86ce975f496e6002ba953fa9b888.slice. Jan 13 21:14:39.446820 systemd[1]: Created slice kubepods-burstable-pod49f33b161564998a529f6ce3388c49b4.slice - libcontainer container kubepods-burstable-pod49f33b161564998a529f6ce3388c49b4.slice. 
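Editor's note: the three kubepods-burstable-pod*.slice units created above back the static control-plane pods (kube-apiserver, kube-controller-manager, kube-scheduler) that the kubelet reads from its static pod path, /etc/kubernetes/manifests, as logged earlier. A few ways to inspect them on the node once the sandboxes are up, assuming crictl is installed:

```bash
ls /etc/kubernetes/manifests/            # kubeadm writes kube-apiserver.yaml, kube-controller-manager.yaml, ...
systemd-cgls --no-pager --unit kubepods.slice   # cgroup tree for the pod slices seen above
crictl pods --name kube-apiserver        # pod sandboxes created for the static apiserver pod
```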
Jan 13 21:14:39.503421 kubelet[2098]: I0113 21:14:39.503391 2098 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/50a9ae38ddb3bec3278d8dc73a6a7009-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"50a9ae38ddb3bec3278d8dc73a6a7009\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 21:14:39.503689 kubelet[2098]: I0113 21:14:39.503539 2098 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a52b86ce975f496e6002ba953fa9b888-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a52b86ce975f496e6002ba953fa9b888\") " pod="kube-system/kube-scheduler-localhost" Jan 13 21:14:39.503689 kubelet[2098]: I0113 21:14:39.503564 2098 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/50a9ae38ddb3bec3278d8dc73a6a7009-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"50a9ae38ddb3bec3278d8dc73a6a7009\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 21:14:39.503689 kubelet[2098]: I0113 21:14:39.503581 2098 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/50a9ae38ddb3bec3278d8dc73a6a7009-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"50a9ae38ddb3bec3278d8dc73a6a7009\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 21:14:39.503689 kubelet[2098]: I0113 21:14:39.503598 2098 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/50a9ae38ddb3bec3278d8dc73a6a7009-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"50a9ae38ddb3bec3278d8dc73a6a7009\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 21:14:39.503689 kubelet[2098]: I0113 21:14:39.503613 2098 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/50a9ae38ddb3bec3278d8dc73a6a7009-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"50a9ae38ddb3bec3278d8dc73a6a7009\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 21:14:39.503855 kubelet[2098]: I0113 21:14:39.503628 2098 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/49f33b161564998a529f6ce3388c49b4-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"49f33b161564998a529f6ce3388c49b4\") " pod="kube-system/kube-apiserver-localhost" Jan 13 21:14:39.503855 kubelet[2098]: I0113 21:14:39.503642 2098 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/49f33b161564998a529f6ce3388c49b4-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"49f33b161564998a529f6ce3388c49b4\") " pod="kube-system/kube-apiserver-localhost" Jan 13 21:14:39.503855 kubelet[2098]: I0113 21:14:39.503656 2098 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/49f33b161564998a529f6ce3388c49b4-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"49f33b161564998a529f6ce3388c49b4\") " 
pod="kube-system/kube-apiserver-localhost" Jan 13 21:14:39.521308 kubelet[2098]: I0113 21:14:39.521232 2098 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jan 13 21:14:39.521546 kubelet[2098]: E0113 21:14:39.521517 2098 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.48:6443/api/v1/nodes\": dial tcp 10.0.0.48:6443: connect: connection refused" node="localhost" Jan 13 21:14:39.705459 kubelet[2098]: E0113 21:14:39.705331 2098 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.48:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.48:6443: connect: connection refused" interval="800ms" Jan 13 21:14:39.742707 kubelet[2098]: E0113 21:14:39.742654 2098 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:14:39.743413 containerd[1434]: time="2025-01-13T21:14:39.743247398Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:50a9ae38ddb3bec3278d8dc73a6a7009,Namespace:kube-system,Attempt:0,}" Jan 13 21:14:39.746463 kubelet[2098]: E0113 21:14:39.746442 2098 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:14:39.746889 containerd[1434]: time="2025-01-13T21:14:39.746862228Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a52b86ce975f496e6002ba953fa9b888,Namespace:kube-system,Attempt:0,}" Jan 13 21:14:39.749265 kubelet[2098]: E0113 21:14:39.749237 2098 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:14:39.749542 containerd[1434]: time="2025-01-13T21:14:39.749517939Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:49f33b161564998a529f6ce3388c49b4,Namespace:kube-system,Attempt:0,}" Jan 13 21:14:39.923332 kubelet[2098]: I0113 21:14:39.923285 2098 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jan 13 21:14:39.923640 kubelet[2098]: E0113 21:14:39.923592 2098 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.48:6443/api/v1/nodes\": dial tcp 10.0.0.48:6443: connect: connection refused" node="localhost" Jan 13 21:14:39.939262 kubelet[2098]: W0113 21:14:39.939165 2098 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.48:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.48:6443: connect: connection refused Jan 13 21:14:39.939262 kubelet[2098]: E0113 21:14:39.939232 2098 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.48:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.48:6443: connect: connection refused" logger="UnhandledError" Jan 13 21:14:40.283403 kubelet[2098]: W0113 21:14:40.283340 2098 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.48:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.48:6443: connect: 
connection refused Jan 13 21:14:40.283753 kubelet[2098]: E0113 21:14:40.283410 2098 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.48:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.48:6443: connect: connection refused" logger="UnhandledError" Jan 13 21:14:40.356541 kubelet[2098]: W0113 21:14:40.356448 2098 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.48:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.48:6443: connect: connection refused Jan 13 21:14:40.356541 kubelet[2098]: E0113 21:14:40.356509 2098 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.48:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.48:6443: connect: connection refused" logger="UnhandledError" Jan 13 21:14:40.373646 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3455363898.mount: Deactivated successfully. Jan 13 21:14:40.379236 containerd[1434]: time="2025-01-13T21:14:40.378361865Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 21:14:40.379740 containerd[1434]: time="2025-01-13T21:14:40.379712878Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 13 21:14:40.380444 containerd[1434]: time="2025-01-13T21:14:40.380413465Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 21:14:40.381334 containerd[1434]: time="2025-01-13T21:14:40.381298780Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 13 21:14:40.382446 containerd[1434]: time="2025-01-13T21:14:40.382418263Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 21:14:40.384209 containerd[1434]: time="2025-01-13T21:14:40.384175572Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 21:14:40.386420 containerd[1434]: time="2025-01-13T21:14:40.386384098Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Jan 13 21:14:40.387874 containerd[1434]: time="2025-01-13T21:14:40.387840715Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 21:14:40.389509 containerd[1434]: time="2025-01-13T21:14:40.389476819Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest 
\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 646.148258ms" Jan 13 21:14:40.391100 containerd[1434]: time="2025-01-13T21:14:40.390957437Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 641.387216ms" Jan 13 21:14:40.393293 containerd[1434]: time="2025-01-13T21:14:40.393154603Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 646.234692ms" Jan 13 21:14:40.510131 kubelet[2098]: E0113 21:14:40.506182 2098 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.48:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.48:6443: connect: connection refused" interval="1.6s" Jan 13 21:14:40.525408 containerd[1434]: time="2025-01-13T21:14:40.525037472Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:14:40.525408 containerd[1434]: time="2025-01-13T21:14:40.525112355Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:14:40.525408 containerd[1434]: time="2025-01-13T21:14:40.525130476Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:14:40.525408 containerd[1434]: time="2025-01-13T21:14:40.525209519Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:14:40.525726 containerd[1434]: time="2025-01-13T21:14:40.525386166Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:14:40.525726 containerd[1434]: time="2025-01-13T21:14:40.525455288Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:14:40.525726 containerd[1434]: time="2025-01-13T21:14:40.525470329Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:14:40.525726 containerd[1434]: time="2025-01-13T21:14:40.525557252Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:14:40.527739 containerd[1434]: time="2025-01-13T21:14:40.526642615Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:14:40.527739 containerd[1434]: time="2025-01-13T21:14:40.526717778Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:14:40.527739 containerd[1434]: time="2025-01-13T21:14:40.526733818Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:14:40.527739 containerd[1434]: time="2025-01-13T21:14:40.526804821Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:14:40.541796 kubelet[2098]: W0113 21:14:40.541647 2098 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.48:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.48:6443: connect: connection refused Jan 13 21:14:40.541796 kubelet[2098]: E0113 21:14:40.541756 2098 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.48:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.48:6443: connect: connection refused" logger="UnhandledError" Jan 13 21:14:40.550868 systemd[1]: Started cri-containerd-2ed5aa63c39b27884ddc50bb4c4ed9a7150c4db5078994d54bf09f0efd6fe968.scope - libcontainer container 2ed5aa63c39b27884ddc50bb4c4ed9a7150c4db5078994d54bf09f0efd6fe968. Jan 13 21:14:40.551971 systemd[1]: Started cri-containerd-4fadbd22579549389f5e78121009e212f7e3d55aadd255ecf90be3a7c313e0e6.scope - libcontainer container 4fadbd22579549389f5e78121009e212f7e3d55aadd255ecf90be3a7c313e0e6. Jan 13 21:14:40.552970 systemd[1]: Started cri-containerd-8f19b867374954cce359256eb377ae307d29e3709b94e11a3e4ed2a85dd25547.scope - libcontainer container 8f19b867374954cce359256eb377ae307d29e3709b94e11a3e4ed2a85dd25547. Jan 13 21:14:40.583417 containerd[1434]: time="2025-01-13T21:14:40.583380270Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:50a9ae38ddb3bec3278d8dc73a6a7009,Namespace:kube-system,Attempt:0,} returns sandbox id \"4fadbd22579549389f5e78121009e212f7e3d55aadd255ecf90be3a7c313e0e6\"" Jan 13 21:14:40.585407 kubelet[2098]: E0113 21:14:40.585217 2098 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:14:40.588830 containerd[1434]: time="2025-01-13T21:14:40.588792361Z" level=info msg="CreateContainer within sandbox \"4fadbd22579549389f5e78121009e212f7e3d55aadd255ecf90be3a7c313e0e6\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 13 21:14:40.594289 containerd[1434]: time="2025-01-13T21:14:40.594261495Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a52b86ce975f496e6002ba953fa9b888,Namespace:kube-system,Attempt:0,} returns sandbox id \"8f19b867374954cce359256eb377ae307d29e3709b94e11a3e4ed2a85dd25547\"" Jan 13 21:14:40.595230 kubelet[2098]: E0113 21:14:40.595202 2098 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:14:40.595865 containerd[1434]: time="2025-01-13T21:14:40.595818396Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:49f33b161564998a529f6ce3388c49b4,Namespace:kube-system,Attempt:0,} returns sandbox id \"2ed5aa63c39b27884ddc50bb4c4ed9a7150c4db5078994d54bf09f0efd6fe968\"" Jan 13 21:14:40.596583 kubelet[2098]: E0113 21:14:40.596408 2098 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the 
applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:14:40.597707 containerd[1434]: time="2025-01-13T21:14:40.597672708Z" level=info msg="CreateContainer within sandbox \"8f19b867374954cce359256eb377ae307d29e3709b94e11a3e4ed2a85dd25547\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 13 21:14:40.598455 containerd[1434]: time="2025-01-13T21:14:40.598362575Z" level=info msg="CreateContainer within sandbox \"2ed5aa63c39b27884ddc50bb4c4ed9a7150c4db5078994d54bf09f0efd6fe968\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 13 21:14:40.609077 containerd[1434]: time="2025-01-13T21:14:40.609014911Z" level=info msg="CreateContainer within sandbox \"4fadbd22579549389f5e78121009e212f7e3d55aadd255ecf90be3a7c313e0e6\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"76a5b244da9311f901f6803cc7625053778e0dd1ffd0cb5f82acbb47dc056ea8\"" Jan 13 21:14:40.609679 containerd[1434]: time="2025-01-13T21:14:40.609651896Z" level=info msg="StartContainer for \"76a5b244da9311f901f6803cc7625053778e0dd1ffd0cb5f82acbb47dc056ea8\"" Jan 13 21:14:40.619244 containerd[1434]: time="2025-01-13T21:14:40.619138986Z" level=info msg="CreateContainer within sandbox \"2ed5aa63c39b27884ddc50bb4c4ed9a7150c4db5078994d54bf09f0efd6fe968\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"839fb94b23df0bb612399125f08683850f9c188569f86bf7a609225a0e538c90\"" Jan 13 21:14:40.619618 containerd[1434]: time="2025-01-13T21:14:40.619560082Z" level=info msg="StartContainer for \"839fb94b23df0bb612399125f08683850f9c188569f86bf7a609225a0e538c90\"" Jan 13 21:14:40.620186 containerd[1434]: time="2025-01-13T21:14:40.620144545Z" level=info msg="CreateContainer within sandbox \"8f19b867374954cce359256eb377ae307d29e3709b94e11a3e4ed2a85dd25547\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"0f87ffa887a7f14af50ba45432f6bc0995fbd2d4d168c7ea9264a93c6700228e\"" Jan 13 21:14:40.620465 containerd[1434]: time="2025-01-13T21:14:40.620443437Z" level=info msg="StartContainer for \"0f87ffa887a7f14af50ba45432f6bc0995fbd2d4d168c7ea9264a93c6700228e\"" Jan 13 21:14:40.631875 systemd[1]: Started cri-containerd-76a5b244da9311f901f6803cc7625053778e0dd1ffd0cb5f82acbb47dc056ea8.scope - libcontainer container 76a5b244da9311f901f6803cc7625053778e0dd1ffd0cb5f82acbb47dc056ea8. Jan 13 21:14:40.649931 systemd[1]: Started cri-containerd-0f87ffa887a7f14af50ba45432f6bc0995fbd2d4d168c7ea9264a93c6700228e.scope - libcontainer container 0f87ffa887a7f14af50ba45432f6bc0995fbd2d4d168c7ea9264a93c6700228e. Jan 13 21:14:40.655426 systemd[1]: Started cri-containerd-839fb94b23df0bb612399125f08683850f9c188569f86bf7a609225a0e538c90.scope - libcontainer container 839fb94b23df0bb612399125f08683850f9c188569f86bf7a609225a0e538c90. 
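The systemd entries above pair every sandbox and container that containerd creates with a transient scope unit named after its 64-character id ("cri-containerd-<id>.scope"), which is what the systemd cgroup driver reported later in this log ("CgroupDriver":"systemd", cgroup v2) produces. A minimal sketch of that naming pattern, assuming the unified cgroupfs is mounted at /sys/fs/cgroup; the helpers are illustrative, not kubelet or containerd code:

from pathlib import Path

# Sketch only: containerd reports 64-hex sandbox/container ids and systemd starts a
# matching transient scope, "cri-containerd-<id>.scope" (see the entries above).
# The cgroupfs root is an assumption; the functions are illustrative helpers.
def scope_unit(container_id: str) -> str:
    return f"cri-containerd-{container_id}.scope"

def find_cgroup(container_id: str, root: str = "/sys/fs/cgroup"):
    # Search the whole cgroup filesystem for the scope directory, if present.
    matches = list(Path(root).rglob(scope_unit(container_id)))
    return matches[0] if matches else None
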
Jan 13 21:14:40.678269 containerd[1434]: time="2025-01-13T21:14:40.678159610Z" level=info msg="StartContainer for \"76a5b244da9311f901f6803cc7625053778e0dd1ffd0cb5f82acbb47dc056ea8\" returns successfully" Jan 13 21:14:40.704312 containerd[1434]: time="2025-01-13T21:14:40.704220228Z" level=info msg="StartContainer for \"0f87ffa887a7f14af50ba45432f6bc0995fbd2d4d168c7ea9264a93c6700228e\" returns successfully" Jan 13 21:14:40.725741 containerd[1434]: time="2025-01-13T21:14:40.725474138Z" level=info msg="StartContainer for \"839fb94b23df0bb612399125f08683850f9c188569f86bf7a609225a0e538c90\" returns successfully" Jan 13 21:14:40.725828 kubelet[2098]: I0113 21:14:40.725357 2098 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jan 13 21:14:40.732231 kubelet[2098]: E0113 21:14:40.730113 2098 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.48:6443/api/v1/nodes\": dial tcp 10.0.0.48:6443: connect: connection refused" node="localhost" Jan 13 21:14:41.124855 kubelet[2098]: E0113 21:14:41.124815 2098 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:14:41.127532 kubelet[2098]: E0113 21:14:41.127469 2098 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:14:41.128770 kubelet[2098]: E0113 21:14:41.128747 2098 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:14:42.132528 kubelet[2098]: E0113 21:14:42.132482 2098 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:14:42.133061 kubelet[2098]: E0113 21:14:42.133038 2098 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:14:42.297438 kubelet[2098]: E0113 21:14:42.297306 2098 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jan 13 21:14:42.332209 kubelet[2098]: I0113 21:14:42.332007 2098 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jan 13 21:14:42.336992 kubelet[2098]: E0113 21:14:42.336858 2098 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.181a5d043d7be799 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-01-13 21:14:39.097530265 +0000 UTC m=+1.112568033,LastTimestamp:2025-01-13 21:14:39.097530265 +0000 UTC m=+1.112568033,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 13 21:14:42.347326 kubelet[2098]: I0113 21:14:42.347281 2098 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Jan 13 21:14:42.347326 kubelet[2098]: E0113 21:14:42.347321 2098 kubelet_node_status.go:535] 
"Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Jan 13 21:14:42.356733 kubelet[2098]: E0113 21:14:42.356683 2098 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 13 21:14:42.392757 kubelet[2098]: E0113 21:14:42.391504 2098 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.181a5d043dccd93e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-01-13 21:14:39.102835006 +0000 UTC m=+1.117872774,LastTimestamp:2025-01-13 21:14:39.102835006 +0000 UTC m=+1.117872774,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 13 21:14:42.446593 kubelet[2098]: E0113 21:14:42.446228 2098 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.181a5d043ea6da71 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node localhost status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-01-13 21:14:39.117122161 +0000 UTC m=+1.132159929,LastTimestamp:2025-01-13 21:14:39.117122161 +0000 UTC m=+1.132159929,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 13 21:14:42.457372 kubelet[2098]: E0113 21:14:42.457328 2098 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 13 21:14:42.500527 kubelet[2098]: E0113 21:14:42.499490 2098 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.181a5d043ea6f11a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node localhost status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-01-13 21:14:39.117127962 +0000 UTC m=+1.132165730,LastTimestamp:2025-01-13 21:14:39.117127962 +0000 UTC m=+1.132165730,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 13 21:14:42.558193 kubelet[2098]: E0113 21:14:42.558128 2098 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 13 21:14:42.659309 kubelet[2098]: E0113 21:14:42.658878 2098 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 13 21:14:42.759728 kubelet[2098]: E0113 21:14:42.759679 2098 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 13 21:14:42.860046 kubelet[2098]: E0113 21:14:42.859996 2098 kubelet_node_status.go:453] "Error 
getting the current node from lister" err="node \"localhost\" not found" Jan 13 21:14:42.960866 kubelet[2098]: E0113 21:14:42.960493 2098 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 13 21:14:43.061658 kubelet[2098]: E0113 21:14:43.061612 2098 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 13 21:14:43.093240 kubelet[2098]: I0113 21:14:43.093203 2098 apiserver.go:52] "Watching apiserver" Jan 13 21:14:43.102519 kubelet[2098]: I0113 21:14:43.102471 2098 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 13 21:14:44.446572 systemd[1]: Reloading requested from client PID 2373 ('systemctl') (unit session-7.scope)... Jan 13 21:14:44.446587 systemd[1]: Reloading... Jan 13 21:14:44.511734 zram_generator::config[2415]: No configuration found. Jan 13 21:14:44.601300 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 21:14:44.665116 systemd[1]: Reloading finished in 218 ms. Jan 13 21:14:44.698157 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:14:44.713249 systemd[1]: kubelet.service: Deactivated successfully. Jan 13 21:14:44.713478 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:14:44.713525 systemd[1]: kubelet.service: Consumed 1.467s CPU time, 119.6M memory peak, 0B memory swap peak. Jan 13 21:14:44.725062 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:14:44.815207 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:14:44.819694 (kubelet)[2454]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 13 21:14:44.862922 kubelet[2454]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 21:14:44.862922 kubelet[2454]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 13 21:14:44.862922 kubelet[2454]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 21:14:44.863261 kubelet[2454]: I0113 21:14:44.863009 2454 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 13 21:14:44.869935 kubelet[2454]: I0113 21:14:44.869884 2454 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Jan 13 21:14:44.869935 kubelet[2454]: I0113 21:14:44.869928 2454 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 13 21:14:44.870225 kubelet[2454]: I0113 21:14:44.870205 2454 server.go:929] "Client rotation is on, will bootstrap in background" Jan 13 21:14:44.871595 kubelet[2454]: I0113 21:14:44.871563 2454 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
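Up to the restart above, the kubelet keeps failing to reach the API server it is itself bringing up ("dial tcp 10.0.0.48:6443: connect: connection refused"), so node registration and the kube-node-lease updates are retried and the reported retry interval doubles from 800ms to 1.6s before registration succeeds. A rough sketch of that kind of capped-doubling reachability probe, assuming a plain TCP check, a 2s connect timeout, and a 7s cap, none of which is taken from kubelet code:

import socket
import time

# Illustrative only: probe the API server endpoint seen in the log (10.0.0.48:6443)
# with a doubling retry interval (0.8s -> 1.6s -> ..., as in the entries above).
def wait_for_apiserver(host: str = "10.0.0.48", port: int = 6443,
                       interval: float = 0.8, cap: float = 7.0) -> None:
    while True:
        try:
            with socket.create_connection((host, port), timeout=2):
                return  # port accepts connections; the API server is answering
        except OSError:
            time.sleep(interval)
            interval = min(interval * 2, cap)  # capped exponential backoff
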
Jan 13 21:14:44.873828 kubelet[2454]: I0113 21:14:44.873768 2454 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 13 21:14:44.877002 kubelet[2454]: E0113 21:14:44.876879 2454 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 13 21:14:44.877002 kubelet[2454]: I0113 21:14:44.876950 2454 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 13 21:14:44.879945 kubelet[2454]: I0113 21:14:44.879140 2454 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 13 21:14:44.879945 kubelet[2454]: I0113 21:14:44.879275 2454 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 13 21:14:44.879945 kubelet[2454]: I0113 21:14:44.879374 2454 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 13 21:14:44.879945 kubelet[2454]: I0113 21:14:44.879398 2454 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 13 21:14:44.880134 kubelet[2454]: I0113 21:14:44.879578 2454 topology_manager.go:138] "Creating topology manager with none policy" Jan 13 21:14:44.880134 kubelet[2454]: I0113 21:14:44.879587 2454 container_manager_linux.go:300] "Creating device plugin manager" Jan 13 21:14:44.880134 kubelet[2454]: I0113 21:14:44.879616 2454 state_mem.go:36] "Initialized new in-memory state store" Jan 13 21:14:44.880134 kubelet[2454]: I0113 21:14:44.879744 2454 kubelet.go:408] "Attempting to sync node with API server" Jan 13 21:14:44.880134 kubelet[2454]: I0113 21:14:44.879758 2454 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 13 21:14:44.880134 kubelet[2454]: I0113 21:14:44.879779 
2454 kubelet.go:314] "Adding apiserver pod source" Jan 13 21:14:44.880134 kubelet[2454]: I0113 21:14:44.879790 2454 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 13 21:14:44.880659 kubelet[2454]: I0113 21:14:44.880399 2454 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 13 21:14:44.881055 kubelet[2454]: I0113 21:14:44.881037 2454 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 13 21:14:44.884726 kubelet[2454]: I0113 21:14:44.882657 2454 server.go:1269] "Started kubelet" Jan 13 21:14:44.884726 kubelet[2454]: I0113 21:14:44.883349 2454 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 13 21:14:44.884726 kubelet[2454]: I0113 21:14:44.883547 2454 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 13 21:14:44.884726 kubelet[2454]: I0113 21:14:44.884173 2454 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 13 21:14:44.885295 kubelet[2454]: I0113 21:14:44.885273 2454 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 13 21:14:44.887866 kubelet[2454]: I0113 21:14:44.885591 2454 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 13 21:14:44.899215 kubelet[2454]: I0113 21:14:44.899169 2454 factory.go:221] Registration of the systemd container factory successfully Jan 13 21:14:44.899423 kubelet[2454]: I0113 21:14:44.885551 2454 server.go:460] "Adding debug handlers to kubelet server" Jan 13 21:14:44.900182 kubelet[2454]: I0113 21:14:44.900125 2454 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 13 21:14:44.900360 kubelet[2454]: E0113 21:14:44.886277 2454 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 13 21:14:44.900506 kubelet[2454]: I0113 21:14:44.886086 2454 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 13 21:14:44.900779 kubelet[2454]: I0113 21:14:44.886108 2454 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 13 21:14:44.908465 kubelet[2454]: I0113 21:14:44.908041 2454 reconciler.go:26] "Reconciler: start to sync state" Jan 13 21:14:44.912164 kubelet[2454]: I0113 21:14:44.912136 2454 factory.go:221] Registration of the containerd container factory successfully Jan 13 21:14:44.915734 kubelet[2454]: I0113 21:14:44.915683 2454 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 13 21:14:44.916781 kubelet[2454]: I0113 21:14:44.916758 2454 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 13 21:14:44.916881 kubelet[2454]: I0113 21:14:44.916867 2454 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 13 21:14:44.916954 kubelet[2454]: I0113 21:14:44.916944 2454 kubelet.go:2321] "Starting kubelet main sync loop" Jan 13 21:14:44.917051 kubelet[2454]: E0113 21:14:44.917032 2454 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 13 21:14:44.917273 kubelet[2454]: E0113 21:14:44.917252 2454 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 13 21:14:44.947942 kubelet[2454]: I0113 21:14:44.947914 2454 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 13 21:14:44.948135 kubelet[2454]: I0113 21:14:44.948108 2454 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 13 21:14:44.948206 kubelet[2454]: I0113 21:14:44.948197 2454 state_mem.go:36] "Initialized new in-memory state store" Jan 13 21:14:44.948441 kubelet[2454]: I0113 21:14:44.948373 2454 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 13 21:14:44.948532 kubelet[2454]: I0113 21:14:44.948503 2454 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 13 21:14:44.948586 kubelet[2454]: I0113 21:14:44.948577 2454 policy_none.go:49] "None policy: Start" Jan 13 21:14:44.950114 kubelet[2454]: I0113 21:14:44.950101 2454 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 13 21:14:44.950245 kubelet[2454]: I0113 21:14:44.950233 2454 state_mem.go:35] "Initializing new in-memory state store" Jan 13 21:14:44.950460 kubelet[2454]: I0113 21:14:44.950441 2454 state_mem.go:75] "Updated machine memory state" Jan 13 21:14:44.955618 kubelet[2454]: I0113 21:14:44.954657 2454 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 13 21:14:44.955618 kubelet[2454]: I0113 21:14:44.954822 2454 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 13 21:14:44.955618 kubelet[2454]: I0113 21:14:44.954833 2454 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 13 21:14:44.955618 kubelet[2454]: I0113 21:14:44.955357 2454 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 13 21:14:45.058563 kubelet[2454]: I0113 21:14:45.058528 2454 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jan 13 21:14:45.067854 kubelet[2454]: I0113 21:14:45.067826 2454 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Jan 13 21:14:45.068054 kubelet[2454]: I0113 21:14:45.068040 2454 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Jan 13 21:14:45.210408 kubelet[2454]: I0113 21:14:45.210291 2454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/49f33b161564998a529f6ce3388c49b4-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"49f33b161564998a529f6ce3388c49b4\") " pod="kube-system/kube-apiserver-localhost" Jan 13 21:14:45.210408 kubelet[2454]: I0113 21:14:45.210334 2454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/50a9ae38ddb3bec3278d8dc73a6a7009-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"50a9ae38ddb3bec3278d8dc73a6a7009\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 21:14:45.210408 kubelet[2454]: I0113 21:14:45.210352 2454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a52b86ce975f496e6002ba953fa9b888-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a52b86ce975f496e6002ba953fa9b888\") " pod="kube-system/kube-scheduler-localhost" Jan 13 21:14:45.210408 kubelet[2454]: I0113 21:14:45.210368 2454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/49f33b161564998a529f6ce3388c49b4-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"49f33b161564998a529f6ce3388c49b4\") " pod="kube-system/kube-apiserver-localhost" Jan 13 21:14:45.210408 kubelet[2454]: I0113 21:14:45.210389 2454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/49f33b161564998a529f6ce3388c49b4-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"49f33b161564998a529f6ce3388c49b4\") " pod="kube-system/kube-apiserver-localhost" Jan 13 21:14:45.210597 kubelet[2454]: I0113 21:14:45.210407 2454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/50a9ae38ddb3bec3278d8dc73a6a7009-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"50a9ae38ddb3bec3278d8dc73a6a7009\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 21:14:45.210597 kubelet[2454]: I0113 21:14:45.210423 2454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/50a9ae38ddb3bec3278d8dc73a6a7009-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"50a9ae38ddb3bec3278d8dc73a6a7009\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 21:14:45.210597 kubelet[2454]: I0113 21:14:45.210438 2454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/50a9ae38ddb3bec3278d8dc73a6a7009-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"50a9ae38ddb3bec3278d8dc73a6a7009\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 21:14:45.210597 kubelet[2454]: I0113 21:14:45.210454 2454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/50a9ae38ddb3bec3278d8dc73a6a7009-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"50a9ae38ddb3bec3278d8dc73a6a7009\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 21:14:45.332588 kubelet[2454]: E0113 21:14:45.332543 2454 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:14:45.332738 kubelet[2454]: E0113 21:14:45.332543 2454 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:14:45.332738 kubelet[2454]: E0113 21:14:45.332729 2454 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:14:45.880938 kubelet[2454]: I0113 21:14:45.880887 2454 apiserver.go:52] "Watching apiserver" Jan 13 21:14:45.910942 kubelet[2454]: I0113 21:14:45.910863 2454 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 13 21:14:45.932196 kubelet[2454]: E0113 21:14:45.932169 2454 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:14:45.933763 kubelet[2454]: E0113 21:14:45.933742 2454 dns.go:153] "Nameserver limits 
exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:14:45.941237 kubelet[2454]: E0113 21:14:45.941201 2454 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jan 13 21:14:45.941384 kubelet[2454]: E0113 21:14:45.941365 2454 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:14:45.968542 kubelet[2454]: I0113 21:14:45.968406 2454 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=0.968388229 podStartE2EDuration="968.388229ms" podCreationTimestamp="2025-01-13 21:14:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:14:45.956252566 +0000 UTC m=+1.133330108" watchObservedRunningTime="2025-01-13 21:14:45.968388229 +0000 UTC m=+1.145465811" Jan 13 21:14:45.981715 kubelet[2454]: I0113 21:14:45.981634 2454 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=0.981615803 podStartE2EDuration="981.615803ms" podCreationTimestamp="2025-01-13 21:14:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:14:45.968564274 +0000 UTC m=+1.145641856" watchObservedRunningTime="2025-01-13 21:14:45.981615803 +0000 UTC m=+1.158693385" Jan 13 21:14:45.999006 kubelet[2454]: I0113 21:14:45.996806 2454 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=0.996790632 podStartE2EDuration="996.790632ms" podCreationTimestamp="2025-01-13 21:14:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:14:45.983963869 +0000 UTC m=+1.161041491" watchObservedRunningTime="2025-01-13 21:14:45.996790632 +0000 UTC m=+1.173868214" Jan 13 21:14:46.934722 kubelet[2454]: E0113 21:14:46.933331 2454 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:14:46.935125 kubelet[2454]: E0113 21:14:46.934439 2454 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:14:49.617596 sudo[1613]: pam_unix(sudo:session): session closed for user root Jan 13 21:14:49.620461 sshd[1610]: pam_unix(sshd:session): session closed for user core Jan 13 21:14:49.625876 systemd[1]: sshd@6-10.0.0.48:22-10.0.0.1:39282.service: Deactivated successfully. Jan 13 21:14:49.627440 systemd[1]: session-7.scope: Deactivated successfully. Jan 13 21:14:49.627596 systemd[1]: session-7.scope: Consumed 8.251s CPU time, 153.3M memory peak, 0B memory swap peak. Jan 13 21:14:49.628082 systemd-logind[1418]: Session 7 logged out. Waiting for processes to exit. Jan 13 21:14:49.629112 systemd-logind[1418]: Removed session 7. 
Jan 13 21:14:49.797855 kubelet[2454]: I0113 21:14:49.797815 2454 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 13 21:14:49.798461 containerd[1434]: time="2025-01-13T21:14:49.798370832Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 13 21:14:49.798716 kubelet[2454]: I0113 21:14:49.798546 2454 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 13 21:14:50.661453 systemd[1]: Created slice kubepods-besteffort-podc4aca77b_a542_461e_9da9_34707cb6d20e.slice - libcontainer container kubepods-besteffort-podc4aca77b_a542_461e_9da9_34707cb6d20e.slice. Jan 13 21:14:50.780783 systemd[1]: Created slice kubepods-besteffort-pode987440e_d5b7_442f_9479_9531dc12169d.slice - libcontainer container kubepods-besteffort-pode987440e_d5b7_442f_9479_9531dc12169d.slice. Jan 13 21:14:50.845532 kubelet[2454]: I0113 21:14:50.845461 2454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c4aca77b-a542-461e-9da9-34707cb6d20e-lib-modules\") pod \"kube-proxy-dnzkm\" (UID: \"c4aca77b-a542-461e-9da9-34707cb6d20e\") " pod="kube-system/kube-proxy-dnzkm" Jan 13 21:14:50.845532 kubelet[2454]: I0113 21:14:50.845530 2454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c4aca77b-a542-461e-9da9-34707cb6d20e-kube-proxy\") pod \"kube-proxy-dnzkm\" (UID: \"c4aca77b-a542-461e-9da9-34707cb6d20e\") " pod="kube-system/kube-proxy-dnzkm" Jan 13 21:14:50.845895 kubelet[2454]: I0113 21:14:50.845551 2454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c4aca77b-a542-461e-9da9-34707cb6d20e-xtables-lock\") pod \"kube-proxy-dnzkm\" (UID: \"c4aca77b-a542-461e-9da9-34707cb6d20e\") " pod="kube-system/kube-proxy-dnzkm" Jan 13 21:14:50.845895 kubelet[2454]: I0113 21:14:50.845567 2454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s67qq\" (UniqueName: \"kubernetes.io/projected/c4aca77b-a542-461e-9da9-34707cb6d20e-kube-api-access-s67qq\") pod \"kube-proxy-dnzkm\" (UID: \"c4aca77b-a542-461e-9da9-34707cb6d20e\") " pod="kube-system/kube-proxy-dnzkm" Jan 13 21:14:50.946197 kubelet[2454]: I0113 21:14:50.946088 2454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/e987440e-d5b7-442f-9479-9531dc12169d-var-lib-calico\") pod \"tigera-operator-76c4976dd7-l994t\" (UID: \"e987440e-d5b7-442f-9479-9531dc12169d\") " pod="tigera-operator/tigera-operator-76c4976dd7-l994t" Jan 13 21:14:50.946197 kubelet[2454]: I0113 21:14:50.946124 2454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mwmml\" (UniqueName: \"kubernetes.io/projected/e987440e-d5b7-442f-9479-9531dc12169d-kube-api-access-mwmml\") pod \"tigera-operator-76c4976dd7-l994t\" (UID: \"e987440e-d5b7-442f-9479-9531dc12169d\") " pod="tigera-operator/tigera-operator-76c4976dd7-l994t" Jan 13 21:14:50.974082 kubelet[2454]: E0113 21:14:50.974043 2454 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 
21:14:50.974741 containerd[1434]: time="2025-01-13T21:14:50.974436992Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-dnzkm,Uid:c4aca77b-a542-461e-9da9-34707cb6d20e,Namespace:kube-system,Attempt:0,}" Jan 13 21:14:50.991874 containerd[1434]: time="2025-01-13T21:14:50.991762386Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:14:50.992358 containerd[1434]: time="2025-01-13T21:14:50.992191475Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:14:50.992358 containerd[1434]: time="2025-01-13T21:14:50.992214116Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:14:50.992358 containerd[1434]: time="2025-01-13T21:14:50.992317318Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:14:51.013937 systemd[1]: Started cri-containerd-0090bb983763637273417f81d9168a43be70be2db95767d468082b6c65977177.scope - libcontainer container 0090bb983763637273417f81d9168a43be70be2db95767d468082b6c65977177. Jan 13 21:14:51.029891 containerd[1434]: time="2025-01-13T21:14:51.029856810Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-dnzkm,Uid:c4aca77b-a542-461e-9da9-34707cb6d20e,Namespace:kube-system,Attempt:0,} returns sandbox id \"0090bb983763637273417f81d9168a43be70be2db95767d468082b6c65977177\"" Jan 13 21:14:51.030523 kubelet[2454]: E0113 21:14:51.030501 2454 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:14:51.034055 containerd[1434]: time="2025-01-13T21:14:51.034020210Z" level=info msg="CreateContainer within sandbox \"0090bb983763637273417f81d9168a43be70be2db95767d468082b6c65977177\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 13 21:14:51.063773 containerd[1434]: time="2025-01-13T21:14:51.063726181Z" level=info msg="CreateContainer within sandbox \"0090bb983763637273417f81d9168a43be70be2db95767d468082b6c65977177\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"f2081aa11484eebf0acd9b374a8393184ea305975dd6000abc8fd4781c577a28\"" Jan 13 21:14:51.064946 containerd[1434]: time="2025-01-13T21:14:51.064914243Z" level=info msg="StartContainer for \"f2081aa11484eebf0acd9b374a8393184ea305975dd6000abc8fd4781c577a28\"" Jan 13 21:14:51.083587 containerd[1434]: time="2025-01-13T21:14:51.083531761Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4976dd7-l994t,Uid:e987440e-d5b7-442f-9479-9531dc12169d,Namespace:tigera-operator,Attempt:0,}" Jan 13 21:14:51.086865 systemd[1]: Started cri-containerd-f2081aa11484eebf0acd9b374a8393184ea305975dd6000abc8fd4781c577a28.scope - libcontainer container f2081aa11484eebf0acd9b374a8393184ea305975dd6000abc8fd4781c577a28. Jan 13 21:14:51.106265 containerd[1434]: time="2025-01-13T21:14:51.106187996Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:14:51.106375 containerd[1434]: time="2025-01-13T21:14:51.106321958Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:14:51.106375 containerd[1434]: time="2025-01-13T21:14:51.106350319Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:14:51.106506 containerd[1434]: time="2025-01-13T21:14:51.106464401Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:14:51.110166 containerd[1434]: time="2025-01-13T21:14:51.110071430Z" level=info msg="StartContainer for \"f2081aa11484eebf0acd9b374a8393184ea305975dd6000abc8fd4781c577a28\" returns successfully" Jan 13 21:14:51.130884 systemd[1]: Started cri-containerd-8159ec38c3740fe88fe7284e5a08181a465b4ea1e9ede5026dbafc9e0f6e7da9.scope - libcontainer container 8159ec38c3740fe88fe7284e5a08181a465b4ea1e9ede5026dbafc9e0f6e7da9. Jan 13 21:14:51.164153 containerd[1434]: time="2025-01-13T21:14:51.164104508Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4976dd7-l994t,Uid:e987440e-d5b7-442f-9479-9531dc12169d,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"8159ec38c3740fe88fe7284e5a08181a465b4ea1e9ede5026dbafc9e0f6e7da9\"" Jan 13 21:14:51.166413 containerd[1434]: time="2025-01-13T21:14:51.166325550Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\"" Jan 13 21:14:51.942543 kubelet[2454]: E0113 21:14:51.942510 2454 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:14:52.290526 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1590924876.mount: Deactivated successfully. Jan 13 21:14:52.536071 update_engine[1423]: I20250113 21:14:52.535470 1423 update_attempter.cc:509] Updating boot flags... 
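A few entries back the kubelet pushes the node's pod CIDR (192.168.0.0/24) to the runtime and notes that no CNI config has been dropped yet. Purely as arithmetic on that value: a /24 gives the node 256 addresses, of which roughly 254 are usable for pods; the exact reservations depend on the CNI plugin (Calico here), so the count below is only indicative:

import ipaddress

# Pod CIDR copied from the kubelet_network entry above; the "-2" is the conventional
# network/broadcast exclusion and may not match Calico's own IPAM accounting.
pod_cidr = ipaddress.ip_network("192.168.0.0/24")
print(pod_cidr, "->", pod_cidr.num_addresses - 2, "addresses usable for pods")
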
Jan 13 21:14:52.563815 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2805) Jan 13 21:14:52.622724 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2808) Jan 13 21:14:52.637803 containerd[1434]: time="2025-01-13T21:14:52.637758075Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:14:52.639048 containerd[1434]: time="2025-01-13T21:14:52.638859615Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=19125976" Jan 13 21:14:52.639827 containerd[1434]: time="2025-01-13T21:14:52.639793552Z" level=info msg="ImageCreate event name:\"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:14:52.642112 containerd[1434]: time="2025-01-13T21:14:52.642078313Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:14:52.642975 containerd[1434]: time="2025-01-13T21:14:52.642947809Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"19120155\" in 1.476589137s" Jan 13 21:14:52.643033 containerd[1434]: time="2025-01-13T21:14:52.642982049Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\"" Jan 13 21:14:52.647605 containerd[1434]: time="2025-01-13T21:14:52.647556691Z" level=info msg="CreateContainer within sandbox \"8159ec38c3740fe88fe7284e5a08181a465b4ea1e9ede5026dbafc9e0f6e7da9\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 13 21:14:52.657683 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2489069105.mount: Deactivated successfully. Jan 13 21:14:52.658474 containerd[1434]: time="2025-01-13T21:14:52.658427407Z" level=info msg="CreateContainer within sandbox \"8159ec38c3740fe88fe7284e5a08181a465b4ea1e9ede5026dbafc9e0f6e7da9\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"d91f1c9d986260efeccbb52a1b35c4a4a146de17d5476cdad3b44018354cecdb\"" Jan 13 21:14:52.658928 containerd[1434]: time="2025-01-13T21:14:52.658855775Z" level=info msg="StartContainer for \"d91f1c9d986260efeccbb52a1b35c4a4a146de17d5476cdad3b44018354cecdb\"" Jan 13 21:14:52.680853 systemd[1]: Started cri-containerd-d91f1c9d986260efeccbb52a1b35c4a4a146de17d5476cdad3b44018354cecdb.scope - libcontainer container d91f1c9d986260efeccbb52a1b35c4a4a146de17d5476cdad3b44018354cecdb. 
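The containerd pull reports above quote durations as Go duration strings ("646.148258ms" for the pause image, "1.476589137s" for the tigera operator image). To compare them it helps to normalize to seconds; a tiny converter that only handles the two units appearing in this log (illustrative, not containerd code):

# Converts the Go-style durations quoted in the "Pulled image ... in <d>" entries
# above into seconds. Only the "ms" and "s" suffixes seen in this log are handled.
def go_duration_to_seconds(d: str) -> float:
    if d.endswith("ms"):
        return float(d[:-2]) / 1000.0
    if d.endswith("s"):
        return float(d[:-1])
    raise ValueError(f"unsupported duration: {d!r}")

# e.g. go_duration_to_seconds("1.476589137s") == 1.476589137
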
Jan 13 21:14:52.700950 containerd[1434]: time="2025-01-13T21:14:52.700913572Z" level=info msg="StartContainer for \"d91f1c9d986260efeccbb52a1b35c4a4a146de17d5476cdad3b44018354cecdb\" returns successfully" Jan 13 21:14:52.912097 kubelet[2454]: E0113 21:14:52.911936 2454 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:14:52.927849 kubelet[2454]: I0113 21:14:52.927790 2454 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-dnzkm" podStartSLOduration=2.927776335 podStartE2EDuration="2.927776335s" podCreationTimestamp="2025-01-13 21:14:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:14:51.951727828 +0000 UTC m=+7.128805450" watchObservedRunningTime="2025-01-13 21:14:52.927776335 +0000 UTC m=+8.104853917" Jan 13 21:14:52.944886 kubelet[2454]: E0113 21:14:52.944677 2454 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:14:55.871359 kubelet[2454]: E0113 21:14:55.871220 2454 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:14:55.883245 kubelet[2454]: I0113 21:14:55.882966 2454 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-76c4976dd7-l994t" podStartSLOduration=4.402887183 podStartE2EDuration="5.882952625s" podCreationTimestamp="2025-01-13 21:14:50 +0000 UTC" firstStartedPulling="2025-01-13 21:14:51.165240369 +0000 UTC m=+6.342317951" lastFinishedPulling="2025-01-13 21:14:52.645305811 +0000 UTC m=+7.822383393" observedRunningTime="2025-01-13 21:14:52.961067414 +0000 UTC m=+8.138145116" watchObservedRunningTime="2025-01-13 21:14:55.882952625 +0000 UTC m=+11.060030207" Jan 13 21:14:56.194054 kubelet[2454]: E0113 21:14:56.193949 2454 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:14:57.219595 systemd[1]: Created slice kubepods-besteffort-podd1d252b8_7d3d_48db_a42f_5da0b52f77ee.slice - libcontainer container kubepods-besteffort-podd1d252b8_7d3d_48db_a42f_5da0b52f77ee.slice. 
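The pod_startup_latency_tracker entries above report two figures per pod: podStartE2EDuration, the time from podCreationTimestamp to the observed running time, and podStartSLOduration, which additionally excludes the time spent pulling images. The tigera-operator entry lets you verify that relationship directly from the quoted timestamps; a short recomputation with values copied from that entry (truncated to microseconds):

from datetime import datetime, timezone

# Timestamps copied from the tigera-operator pod_startup_latency_tracker entry above.
created = datetime(2025, 1, 13, 21, 14, 50, 0, tzinfo=timezone.utc)
observed_running = datetime(2025, 1, 13, 21, 14, 55, 882952, tzinfo=timezone.utc)
first_pull = datetime(2025, 1, 13, 21, 14, 51, 165240, tzinfo=timezone.utc)
last_pull = datetime(2025, 1, 13, 21, 14, 52, 645305, tzinfo=timezone.utc)

e2e = (observed_running - created).total_seconds()    # ~5.883s, the podStartE2EDuration
slo = e2e - (last_pull - first_pull).total_seconds()  # ~4.403s, the podStartSLOduration
print(f"E2E {e2e:.3f}s, SLO {slo:.3f}s (image pull time excluded)")
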
Jan 13 21:14:57.289146 kubelet[2454]: I0113 21:14:57.289065 2454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d1d252b8-7d3d-48db-a42f-5da0b52f77ee-tigera-ca-bundle\") pod \"calico-typha-6d9f45bd8-2c26v\" (UID: \"d1d252b8-7d3d-48db-a42f-5da0b52f77ee\") " pod="calico-system/calico-typha-6d9f45bd8-2c26v" Jan 13 21:14:57.289146 kubelet[2454]: I0113 21:14:57.289118 2454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kbkzr\" (UniqueName: \"kubernetes.io/projected/d1d252b8-7d3d-48db-a42f-5da0b52f77ee-kube-api-access-kbkzr\") pod \"calico-typha-6d9f45bd8-2c26v\" (UID: \"d1d252b8-7d3d-48db-a42f-5da0b52f77ee\") " pod="calico-system/calico-typha-6d9f45bd8-2c26v" Jan 13 21:14:57.289598 kubelet[2454]: I0113 21:14:57.289176 2454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/d1d252b8-7d3d-48db-a42f-5da0b52f77ee-typha-certs\") pod \"calico-typha-6d9f45bd8-2c26v\" (UID: \"d1d252b8-7d3d-48db-a42f-5da0b52f77ee\") " pod="calico-system/calico-typha-6d9f45bd8-2c26v" Jan 13 21:14:57.410622 systemd[1]: Created slice kubepods-besteffort-pod95860bd9_66d4_4aeb_a4c1_d884e38cc6f0.slice - libcontainer container kubepods-besteffort-pod95860bd9_66d4_4aeb_a4c1_d884e38cc6f0.slice. Jan 13 21:14:57.490813 kubelet[2454]: I0113 21:14:57.490688 2454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/95860bd9-66d4-4aeb-a4c1-d884e38cc6f0-lib-modules\") pod \"calico-node-ddtzj\" (UID: \"95860bd9-66d4-4aeb-a4c1-d884e38cc6f0\") " pod="calico-system/calico-node-ddtzj" Jan 13 21:14:57.490813 kubelet[2454]: I0113 21:14:57.490737 2454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/95860bd9-66d4-4aeb-a4c1-d884e38cc6f0-tigera-ca-bundle\") pod \"calico-node-ddtzj\" (UID: \"95860bd9-66d4-4aeb-a4c1-d884e38cc6f0\") " pod="calico-system/calico-node-ddtzj" Jan 13 21:14:57.490813 kubelet[2454]: I0113 21:14:57.490755 2454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/95860bd9-66d4-4aeb-a4c1-d884e38cc6f0-cni-bin-dir\") pod \"calico-node-ddtzj\" (UID: \"95860bd9-66d4-4aeb-a4c1-d884e38cc6f0\") " pod="calico-system/calico-node-ddtzj" Jan 13 21:14:57.490813 kubelet[2454]: I0113 21:14:57.490776 2454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/95860bd9-66d4-4aeb-a4c1-d884e38cc6f0-var-run-calico\") pod \"calico-node-ddtzj\" (UID: \"95860bd9-66d4-4aeb-a4c1-d884e38cc6f0\") " pod="calico-system/calico-node-ddtzj" Jan 13 21:14:57.490813 kubelet[2454]: I0113 21:14:57.490791 2454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/95860bd9-66d4-4aeb-a4c1-d884e38cc6f0-var-lib-calico\") pod \"calico-node-ddtzj\" (UID: \"95860bd9-66d4-4aeb-a4c1-d884e38cc6f0\") " pod="calico-system/calico-node-ddtzj" Jan 13 21:14:57.491023 kubelet[2454]: I0113 21:14:57.490819 2454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" 
(UniqueName: \"kubernetes.io/host-path/95860bd9-66d4-4aeb-a4c1-d884e38cc6f0-cni-log-dir\") pod \"calico-node-ddtzj\" (UID: \"95860bd9-66d4-4aeb-a4c1-d884e38cc6f0\") " pod="calico-system/calico-node-ddtzj" Jan 13 21:14:57.491023 kubelet[2454]: I0113 21:14:57.490838 2454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/95860bd9-66d4-4aeb-a4c1-d884e38cc6f0-cni-net-dir\") pod \"calico-node-ddtzj\" (UID: \"95860bd9-66d4-4aeb-a4c1-d884e38cc6f0\") " pod="calico-system/calico-node-ddtzj" Jan 13 21:14:57.491023 kubelet[2454]: I0113 21:14:57.490863 2454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/95860bd9-66d4-4aeb-a4c1-d884e38cc6f0-flexvol-driver-host\") pod \"calico-node-ddtzj\" (UID: \"95860bd9-66d4-4aeb-a4c1-d884e38cc6f0\") " pod="calico-system/calico-node-ddtzj" Jan 13 21:14:57.491023 kubelet[2454]: I0113 21:14:57.490891 2454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jxrxj\" (UniqueName: \"kubernetes.io/projected/95860bd9-66d4-4aeb-a4c1-d884e38cc6f0-kube-api-access-jxrxj\") pod \"calico-node-ddtzj\" (UID: \"95860bd9-66d4-4aeb-a4c1-d884e38cc6f0\") " pod="calico-system/calico-node-ddtzj" Jan 13 21:14:57.491023 kubelet[2454]: I0113 21:14:57.490909 2454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/95860bd9-66d4-4aeb-a4c1-d884e38cc6f0-node-certs\") pod \"calico-node-ddtzj\" (UID: \"95860bd9-66d4-4aeb-a4c1-d884e38cc6f0\") " pod="calico-system/calico-node-ddtzj" Jan 13 21:14:57.491133 kubelet[2454]: I0113 21:14:57.490925 2454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/95860bd9-66d4-4aeb-a4c1-d884e38cc6f0-xtables-lock\") pod \"calico-node-ddtzj\" (UID: \"95860bd9-66d4-4aeb-a4c1-d884e38cc6f0\") " pod="calico-system/calico-node-ddtzj" Jan 13 21:14:57.491133 kubelet[2454]: I0113 21:14:57.490940 2454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/95860bd9-66d4-4aeb-a4c1-d884e38cc6f0-policysync\") pod \"calico-node-ddtzj\" (UID: \"95860bd9-66d4-4aeb-a4c1-d884e38cc6f0\") " pod="calico-system/calico-node-ddtzj" Jan 13 21:14:57.526010 kubelet[2454]: E0113 21:14:57.525978 2454 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:14:57.526740 containerd[1434]: time="2025-01-13T21:14:57.526467087Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6d9f45bd8-2c26v,Uid:d1d252b8-7d3d-48db-a42f-5da0b52f77ee,Namespace:calico-system,Attempt:0,}" Jan 13 21:14:57.548480 containerd[1434]: time="2025-01-13T21:14:57.548334612Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:14:57.548480 containerd[1434]: time="2025-01-13T21:14:57.548464413Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:14:57.548691 containerd[1434]: time="2025-01-13T21:14:57.548487494Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:14:57.548691 containerd[1434]: time="2025-01-13T21:14:57.548583295Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:14:57.565857 systemd[1]: Started cri-containerd-74cf5e08b4123334c3dd7e15f7edbe9de4cbb3567162bdce9c8257b571a6a421.scope - libcontainer container 74cf5e08b4123334c3dd7e15f7edbe9de4cbb3567162bdce9c8257b571a6a421. Jan 13 21:14:57.603505 kubelet[2454]: E0113 21:14:57.603447 2454 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-st9qh" podUID="67345a72-9d66-4d9b-8d45-698aed92c23c" Jan 13 21:14:57.609892 kubelet[2454]: E0113 21:14:57.609637 2454 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:14:57.609892 kubelet[2454]: W0113 21:14:57.609674 2454 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:14:57.610033 kubelet[2454]: E0113 21:14:57.609897 2454 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:14:57.611495 kubelet[2454]: E0113 21:14:57.611280 2454 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:14:57.611495 kubelet[2454]: W0113 21:14:57.611307 2454 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:14:57.611495 kubelet[2454]: E0113 21:14:57.611363 2454 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:14:57.616850 kubelet[2454]: E0113 21:14:57.616766 2454 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:14:57.616850 kubelet[2454]: W0113 21:14:57.616788 2454 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:14:57.616850 kubelet[2454]: E0113 21:14:57.616805 2454 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 21:14:57.671006 containerd[1434]: time="2025-01-13T21:14:57.670949130Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6d9f45bd8-2c26v,Uid:d1d252b8-7d3d-48db-a42f-5da0b52f77ee,Namespace:calico-system,Attempt:0,} returns sandbox id \"74cf5e08b4123334c3dd7e15f7edbe9de4cbb3567162bdce9c8257b571a6a421\"" Jan 13 21:14:57.679101 kubelet[2454]: E0113 21:14:57.679060 2454 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:14:57.684015 containerd[1434]: time="2025-01-13T21:14:57.683962699Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\"" Jan 13 21:14:57.688918 kubelet[2454]: E0113 21:14:57.688886 2454 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:14:57.688918 kubelet[2454]: W0113 21:14:57.688908 2454 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:14:57.688918 kubelet[2454]: E0113 21:14:57.688928 2454 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:14:57.689490 kubelet[2454]: E0113 21:14:57.689460 2454 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:14:57.689490 kubelet[2454]: W0113 21:14:57.689472 2454 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:14:57.689490 kubelet[2454]: E0113 21:14:57.689480 2454 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:14:57.689753 kubelet[2454]: E0113 21:14:57.689691 2454 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:14:57.689753 kubelet[2454]: W0113 21:14:57.689727 2454 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:14:57.689753 kubelet[2454]: E0113 21:14:57.689736 2454 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:14:57.690309 kubelet[2454]: E0113 21:14:57.689935 2454 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:14:57.690309 kubelet[2454]: W0113 21:14:57.689946 2454 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:14:57.690309 kubelet[2454]: E0113 21:14:57.689960 2454 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 21:14:57.690309 kubelet[2454]: E0113 21:14:57.690144 2454 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:14:57.690309 kubelet[2454]: W0113 21:14:57.690152 2454 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:14:57.690309 kubelet[2454]: E0113 21:14:57.690161 2454 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:14:57.690482 kubelet[2454]: E0113 21:14:57.690340 2454 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:14:57.690482 kubelet[2454]: W0113 21:14:57.690348 2454 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:14:57.690482 kubelet[2454]: E0113 21:14:57.690355 2454 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:14:57.690549 kubelet[2454]: E0113 21:14:57.690525 2454 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:14:57.690549 kubelet[2454]: W0113 21:14:57.690536 2454 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:14:57.690549 kubelet[2454]: E0113 21:14:57.690542 2454 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:14:57.690909 kubelet[2454]: E0113 21:14:57.690688 2454 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:14:57.690909 kubelet[2454]: W0113 21:14:57.690711 2454 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:14:57.690909 kubelet[2454]: E0113 21:14:57.690719 2454 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:14:57.690909 kubelet[2454]: E0113 21:14:57.690891 2454 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:14:57.690909 kubelet[2454]: W0113 21:14:57.690898 2454 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:14:57.690909 kubelet[2454]: E0113 21:14:57.690906 2454 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 21:14:57.691540 kubelet[2454]: E0113 21:14:57.691066 2454 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:14:57.691540 kubelet[2454]: W0113 21:14:57.691078 2454 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:14:57.691540 kubelet[2454]: E0113 21:14:57.691086 2454 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:14:57.692899 kubelet[2454]: E0113 21:14:57.692868 2454 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:14:57.692899 kubelet[2454]: W0113 21:14:57.692890 2454 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:14:57.692899 kubelet[2454]: E0113 21:14:57.692903 2454 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:14:57.693471 kubelet[2454]: E0113 21:14:57.693445 2454 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:14:57.693471 kubelet[2454]: W0113 21:14:57.693463 2454 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:14:57.693471 kubelet[2454]: E0113 21:14:57.693475 2454 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:14:57.693758 kubelet[2454]: E0113 21:14:57.693743 2454 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:14:57.693758 kubelet[2454]: W0113 21:14:57.693756 2454 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:14:57.693812 kubelet[2454]: E0113 21:14:57.693767 2454 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:14:57.693970 kubelet[2454]: E0113 21:14:57.693955 2454 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:14:57.693970 kubelet[2454]: W0113 21:14:57.693968 2454 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:14:57.694017 kubelet[2454]: E0113 21:14:57.693977 2454 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 21:14:57.694130 kubelet[2454]: E0113 21:14:57.694118 2454 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:14:57.694154 kubelet[2454]: W0113 21:14:57.694129 2454 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:14:57.694154 kubelet[2454]: E0113 21:14:57.694137 2454 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:14:57.694271 kubelet[2454]: E0113 21:14:57.694262 2454 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:14:57.694295 kubelet[2454]: W0113 21:14:57.694271 2454 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:14:57.694295 kubelet[2454]: E0113 21:14:57.694281 2454 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:14:57.694436 kubelet[2454]: E0113 21:14:57.694424 2454 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:14:57.694436 kubelet[2454]: W0113 21:14:57.694435 2454 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:14:57.694478 kubelet[2454]: E0113 21:14:57.694443 2454 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:14:57.694567 kubelet[2454]: E0113 21:14:57.694557 2454 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:14:57.694590 kubelet[2454]: W0113 21:14:57.694566 2454 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:14:57.694590 kubelet[2454]: E0113 21:14:57.694580 2454 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:14:57.694786 kubelet[2454]: E0113 21:14:57.694777 2454 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:14:57.694815 kubelet[2454]: W0113 21:14:57.694786 2454 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:14:57.694815 kubelet[2454]: E0113 21:14:57.694794 2454 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 21:14:57.695055 kubelet[2454]: E0113 21:14:57.695006 2454 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:14:57.695055 kubelet[2454]: W0113 21:14:57.695026 2454 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:14:57.695055 kubelet[2454]: E0113 21:14:57.695038 2454 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:14:57.696301 kubelet[2454]: E0113 21:14:57.696260 2454 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:14:57.696301 kubelet[2454]: W0113 21:14:57.696281 2454 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:14:57.696301 kubelet[2454]: E0113 21:14:57.696294 2454 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:14:57.696414 kubelet[2454]: I0113 21:14:57.696321 2454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/67345a72-9d66-4d9b-8d45-698aed92c23c-varrun\") pod \"csi-node-driver-st9qh\" (UID: \"67345a72-9d66-4d9b-8d45-698aed92c23c\") " pod="calico-system/csi-node-driver-st9qh" Jan 13 21:14:57.697305 kubelet[2454]: E0113 21:14:57.697274 2454 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:14:57.697305 kubelet[2454]: W0113 21:14:57.697294 2454 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:14:57.697388 kubelet[2454]: E0113 21:14:57.697314 2454 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:14:57.697388 kubelet[2454]: I0113 21:14:57.697334 2454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/67345a72-9d66-4d9b-8d45-698aed92c23c-registration-dir\") pod \"csi-node-driver-st9qh\" (UID: \"67345a72-9d66-4d9b-8d45-698aed92c23c\") " pod="calico-system/csi-node-driver-st9qh" Jan 13 21:14:57.698139 kubelet[2454]: E0113 21:14:57.697922 2454 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:14:57.698139 kubelet[2454]: W0113 21:14:57.697944 2454 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:14:57.698139 kubelet[2454]: E0113 21:14:57.697963 2454 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 21:14:57.698139 kubelet[2454]: I0113 21:14:57.697979 2454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/67345a72-9d66-4d9b-8d45-698aed92c23c-socket-dir\") pod \"csi-node-driver-st9qh\" (UID: \"67345a72-9d66-4d9b-8d45-698aed92c23c\") " pod="calico-system/csi-node-driver-st9qh" Jan 13 21:14:57.698742 kubelet[2454]: E0113 21:14:57.698652 2454 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:14:57.698785 kubelet[2454]: W0113 21:14:57.698743 2454 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:14:57.698807 kubelet[2454]: E0113 21:14:57.698781 2454 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:14:57.698830 kubelet[2454]: I0113 21:14:57.698810 2454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/67345a72-9d66-4d9b-8d45-698aed92c23c-kubelet-dir\") pod \"csi-node-driver-st9qh\" (UID: \"67345a72-9d66-4d9b-8d45-698aed92c23c\") " pod="calico-system/csi-node-driver-st9qh" Jan 13 21:14:57.699153 kubelet[2454]: E0113 21:14:57.699093 2454 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:14:57.699153 kubelet[2454]: W0113 21:14:57.699108 2454 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:14:57.699153 kubelet[2454]: E0113 21:14:57.699137 2454 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:14:57.699293 kubelet[2454]: E0113 21:14:57.699277 2454 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:14:57.699293 kubelet[2454]: W0113 21:14:57.699288 2454 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:14:57.699357 kubelet[2454]: E0113 21:14:57.699313 2454 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:14:57.699453 kubelet[2454]: E0113 21:14:57.699440 2454 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:14:57.699476 kubelet[2454]: W0113 21:14:57.699452 2454 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:14:57.699499 kubelet[2454]: E0113 21:14:57.699476 2454 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 21:14:57.699595 kubelet[2454]: E0113 21:14:57.699585 2454 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:14:57.699595 kubelet[2454]: W0113 21:14:57.699594 2454 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:14:57.699639 kubelet[2454]: E0113 21:14:57.699608 2454 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:14:57.699639 kubelet[2454]: I0113 21:14:57.699627 2454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqvvv\" (UniqueName: \"kubernetes.io/projected/67345a72-9d66-4d9b-8d45-698aed92c23c-kube-api-access-cqvvv\") pod \"csi-node-driver-st9qh\" (UID: \"67345a72-9d66-4d9b-8d45-698aed92c23c\") " pod="calico-system/csi-node-driver-st9qh" Jan 13 21:14:57.699781 kubelet[2454]: E0113 21:14:57.699770 2454 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:14:57.699804 kubelet[2454]: W0113 21:14:57.699781 2454 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:14:57.699804 kubelet[2454]: E0113 21:14:57.699794 2454 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:14:57.699942 kubelet[2454]: E0113 21:14:57.699931 2454 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:14:57.699942 kubelet[2454]: W0113 21:14:57.699940 2454 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:14:57.699994 kubelet[2454]: E0113 21:14:57.699948 2454 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:14:57.700203 kubelet[2454]: E0113 21:14:57.700187 2454 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:14:57.700229 kubelet[2454]: W0113 21:14:57.700202 2454 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:14:57.700229 kubelet[2454]: E0113 21:14:57.700217 2454 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 21:14:57.700358 kubelet[2454]: E0113 21:14:57.700347 2454 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:14:57.700383 kubelet[2454]: W0113 21:14:57.700358 2454 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:14:57.700383 kubelet[2454]: E0113 21:14:57.700366 2454 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:14:57.700513 kubelet[2454]: E0113 21:14:57.700503 2454 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:14:57.700534 kubelet[2454]: W0113 21:14:57.700513 2454 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:14:57.700534 kubelet[2454]: E0113 21:14:57.700521 2454 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:14:57.700692 kubelet[2454]: E0113 21:14:57.700681 2454 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:14:57.700752 kubelet[2454]: W0113 21:14:57.700692 2454 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:14:57.700780 kubelet[2454]: E0113 21:14:57.700755 2454 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:14:57.700928 kubelet[2454]: E0113 21:14:57.700916 2454 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:14:57.700928 kubelet[2454]: W0113 21:14:57.700928 2454 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:14:57.700977 kubelet[2454]: E0113 21:14:57.700936 2454 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:14:57.720267 kubelet[2454]: E0113 21:14:57.720231 2454 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:14:57.720768 containerd[1434]: time="2025-01-13T21:14:57.720724699Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-ddtzj,Uid:95860bd9-66d4-4aeb-a4c1-d884e38cc6f0,Namespace:calico-system,Attempt:0,}" Jan 13 21:14:57.741307 containerd[1434]: time="2025-01-13T21:14:57.740817280Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:14:57.741307 containerd[1434]: time="2025-01-13T21:14:57.741201125Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:14:57.741307 containerd[1434]: time="2025-01-13T21:14:57.741214646Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:14:57.741791 containerd[1434]: time="2025-01-13T21:14:57.741745973Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:14:57.763953 systemd[1]: Started cri-containerd-1d6466a1ec1870df86c367e6dd9451406a3059472aa981e6cd8e41c5a2c360cb.scope - libcontainer container 1d6466a1ec1870df86c367e6dd9451406a3059472aa981e6cd8e41c5a2c360cb. Jan 13 21:14:57.787889 containerd[1434]: time="2025-01-13T21:14:57.787839653Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-ddtzj,Uid:95860bd9-66d4-4aeb-a4c1-d884e38cc6f0,Namespace:calico-system,Attempt:0,} returns sandbox id \"1d6466a1ec1870df86c367e6dd9451406a3059472aa981e6cd8e41c5a2c360cb\"" Jan 13 21:14:57.789244 kubelet[2454]: E0113 21:14:57.789061 2454 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:14:57.800657 kubelet[2454]: E0113 21:14:57.800630 2454 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:14:57.800657 kubelet[2454]: W0113 21:14:57.800652 2454 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:14:57.800856 kubelet[2454]: E0113 21:14:57.800670 2454 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:14:57.800911 kubelet[2454]: E0113 21:14:57.800904 2454 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:14:57.800941 kubelet[2454]: W0113 21:14:57.800914 2454 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:14:57.800941 kubelet[2454]: E0113 21:14:57.800931 2454 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:14:57.801157 kubelet[2454]: E0113 21:14:57.801139 2454 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:14:57.801157 kubelet[2454]: W0113 21:14:57.801153 2454 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:14:57.801339 kubelet[2454]: E0113 21:14:57.801168 2454 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 21:14:57.801442 kubelet[2454]: E0113 21:14:57.801425 2454 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:14:57.801510 kubelet[2454]: W0113 21:14:57.801498 2454 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:14:57.801646 kubelet[2454]: E0113 21:14:57.801568 2454 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:14:57.801822 kubelet[2454]: E0113 21:14:57.801809 2454 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:14:57.801918 kubelet[2454]: W0113 21:14:57.801902 2454 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:14:57.802045 kubelet[2454]: E0113 21:14:57.801982 2454 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:14:57.802229 kubelet[2454]: E0113 21:14:57.802216 2454 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:14:57.802341 kubelet[2454]: W0113 21:14:57.802286 2454 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:14:57.802341 kubelet[2454]: E0113 21:14:57.802310 2454 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:14:57.802512 kubelet[2454]: E0113 21:14:57.802496 2454 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:14:57.802554 kubelet[2454]: W0113 21:14:57.802513 2454 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:14:57.802554 kubelet[2454]: E0113 21:14:57.802531 2454 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:14:57.802685 kubelet[2454]: E0113 21:14:57.802673 2454 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:14:57.802745 kubelet[2454]: W0113 21:14:57.802685 2454 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:14:57.802770 kubelet[2454]: E0113 21:14:57.802757 2454 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 21:14:57.802898 kubelet[2454]: E0113 21:14:57.802884 2454 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:14:57.802898 kubelet[2454]: W0113 21:14:57.802897 2454 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:14:57.802977 kubelet[2454]: E0113 21:14:57.802940 2454 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:14:57.803511 kubelet[2454]: E0113 21:14:57.803494 2454 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:14:57.803593 kubelet[2454]: W0113 21:14:57.803511 2454 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:14:57.803593 kubelet[2454]: E0113 21:14:57.803561 2454 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:14:57.803769 kubelet[2454]: E0113 21:14:57.803755 2454 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:14:57.803769 kubelet[2454]: W0113 21:14:57.803768 2454 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:14:57.803865 kubelet[2454]: E0113 21:14:57.803825 2454 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:14:57.803936 kubelet[2454]: E0113 21:14:57.803924 2454 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:14:57.803936 kubelet[2454]: W0113 21:14:57.803936 2454 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:14:57.804005 kubelet[2454]: E0113 21:14:57.803969 2454 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:14:57.804088 kubelet[2454]: E0113 21:14:57.804076 2454 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:14:57.804088 kubelet[2454]: W0113 21:14:57.804087 2454 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:14:57.804142 kubelet[2454]: E0113 21:14:57.804102 2454 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 21:14:57.804312 kubelet[2454]: E0113 21:14:57.804300 2454 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:14:57.804312 kubelet[2454]: W0113 21:14:57.804311 2454 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:14:57.804387 kubelet[2454]: E0113 21:14:57.804332 2454 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:14:57.804473 kubelet[2454]: E0113 21:14:57.804463 2454 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:14:57.804473 kubelet[2454]: W0113 21:14:57.804472 2454 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:14:57.804672 kubelet[2454]: E0113 21:14:57.804496 2454 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:14:57.804672 kubelet[2454]: E0113 21:14:57.804601 2454 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:14:57.804672 kubelet[2454]: W0113 21:14:57.804608 2454 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:14:57.804672 kubelet[2454]: E0113 21:14:57.804621 2454 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:14:57.804838 kubelet[2454]: E0113 21:14:57.804765 2454 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:14:57.804838 kubelet[2454]: W0113 21:14:57.804772 2454 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:14:57.804838 kubelet[2454]: E0113 21:14:57.804787 2454 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:14:57.805113 kubelet[2454]: E0113 21:14:57.805096 2454 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:14:57.805113 kubelet[2454]: W0113 21:14:57.805113 2454 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:14:57.805168 kubelet[2454]: E0113 21:14:57.805130 2454 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 21:14:57.805316 kubelet[2454]: E0113 21:14:57.805302 2454 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:14:57.805354 kubelet[2454]: W0113 21:14:57.805317 2454 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:14:57.805354 kubelet[2454]: E0113 21:14:57.805333 2454 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:14:57.805507 kubelet[2454]: E0113 21:14:57.805497 2454 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:14:57.805507 kubelet[2454]: W0113 21:14:57.805506 2454 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:14:57.805570 kubelet[2454]: E0113 21:14:57.805518 2454 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:14:57.805729 kubelet[2454]: E0113 21:14:57.805712 2454 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:14:57.805729 kubelet[2454]: W0113 21:14:57.805727 2454 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:14:57.805798 kubelet[2454]: E0113 21:14:57.805747 2454 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:14:57.805906 kubelet[2454]: E0113 21:14:57.805895 2454 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:14:57.805906 kubelet[2454]: W0113 21:14:57.805905 2454 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:14:57.805951 kubelet[2454]: E0113 21:14:57.805918 2454 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:14:57.806137 kubelet[2454]: E0113 21:14:57.806123 2454 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:14:57.806137 kubelet[2454]: W0113 21:14:57.806135 2454 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:14:57.806209 kubelet[2454]: E0113 21:14:57.806188 2454 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 21:14:57.806359 kubelet[2454]: E0113 21:14:57.806345 2454 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:14:57.806359 kubelet[2454]: W0113 21:14:57.806357 2454 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:14:57.806426 kubelet[2454]: E0113 21:14:57.806411 2454 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:14:57.806573 kubelet[2454]: E0113 21:14:57.806562 2454 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:14:57.806573 kubelet[2454]: W0113 21:14:57.806572 2454 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:14:57.806637 kubelet[2454]: E0113 21:14:57.806582 2454 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:14:57.820546 kubelet[2454]: E0113 21:14:57.820510 2454 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:14:57.820546 kubelet[2454]: W0113 21:14:57.820530 2454 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:14:57.820546 kubelet[2454]: E0113 21:14:57.820546 2454 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:14:58.812641 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2108398379.mount: Deactivated successfully. 
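
The flood of driver-call.go/plugins.go messages above is the kubelet's FlexVolume prober: it scans the volume plugin directory, invokes each driver with the init argument and tries to decode the JSON the driver prints on stdout. Because /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds is not present, the call produces no output at all, and unmarshalling an empty byte slice in Go fails with exactly the "unexpected end of JSON input" error seen in the log. The sketch below reproduces that failure and, for contrast, parses the kind of status object a working driver would be expected to return; the struct and the sample success payload are illustrative assumptions rather than the kubelet's own types.

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // driverStatus is a minimal stand-in for the status object a FlexVolume
    // driver is expected to print as JSON when invoked with "init".
    type driverStatus struct {
        Status       string          `json:"status"`
        Capabilities map[string]bool `json:"capabilities,omitempty"`
    }

    func main() {
        // The uds binary is missing, so the driver call yields no output.
        emptyOutput := []byte("")

        var st driverStatus
        if err := json.Unmarshal(emptyOutput, &st); err != nil {
            // Prints "unexpected end of JSON input", matching the kubelet log.
            fmt.Println("unmarshal error:", err)
        }

        // A driver that is present would answer "init" with something like this.
        goodOutput := []byte(`{"status":"Success","capabilities":{"attach":false}}`)
        if err := json.Unmarshal(goodOutput, &st); err == nil {
            fmt.Printf("parsed status: %+v\n", st)
        }
    }

These probe errors typically keep repeating until the nodeagent~uds directory either gains a working driver binary or is removed; they do not by themselves prevent the Calico pods from starting, as the later StartContainer entries show.
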
Jan 13 21:14:58.925636 kubelet[2454]: E0113 21:14:58.925555 2454 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-st9qh" podUID="67345a72-9d66-4d9b-8d45-698aed92c23c" Jan 13 21:15:00.328493 containerd[1434]: time="2025-01-13T21:15:00.328432343Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:15:00.389427 containerd[1434]: time="2025-01-13T21:15:00.389379677Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=29231308" Jan 13 21:15:00.405335 containerd[1434]: time="2025-01-13T21:15:00.405271328Z" level=info msg="ImageCreate event name:\"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:15:00.412996 containerd[1434]: time="2025-01-13T21:15:00.412880490Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:15:00.431173 containerd[1434]: time="2025-01-13T21:15:00.430930763Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"29231162\" in 2.746919343s" Jan 13 21:15:00.431173 containerd[1434]: time="2025-01-13T21:15:00.430973324Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\"" Jan 13 21:15:00.432407 containerd[1434]: time="2025-01-13T21:15:00.432381619Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Jan 13 21:15:00.462932 containerd[1434]: time="2025-01-13T21:15:00.462878147Z" level=info msg="CreateContainer within sandbox \"74cf5e08b4123334c3dd7e15f7edbe9de4cbb3567162bdce9c8257b571a6a421\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 13 21:15:00.481259 containerd[1434]: time="2025-01-13T21:15:00.481178623Z" level=info msg="CreateContainer within sandbox \"74cf5e08b4123334c3dd7e15f7edbe9de4cbb3567162bdce9c8257b571a6a421\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"0cd4ee37b5c0a12e7a8e08e14faaf26730194c741ada26194c0243fdadca4100\"" Jan 13 21:15:00.482502 containerd[1434]: time="2025-01-13T21:15:00.482463517Z" level=info msg="StartContainer for \"0cd4ee37b5c0a12e7a8e08e14faaf26730194c741ada26194c0243fdadca4100\"" Jan 13 21:15:00.517219 systemd[1]: Started cri-containerd-0cd4ee37b5c0a12e7a8e08e14faaf26730194c741ada26194c0243fdadca4100.scope - libcontainer container 0cd4ee37b5c0a12e7a8e08e14faaf26730194c741ada26194c0243fdadca4100. 
Jan 13 21:15:00.579539 containerd[1434]: time="2025-01-13T21:15:00.579351557Z" level=info msg="StartContainer for \"0cd4ee37b5c0a12e7a8e08e14faaf26730194c741ada26194c0243fdadca4100\" returns successfully" Jan 13 21:15:00.923804 kubelet[2454]: E0113 21:15:00.923396 2454 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-st9qh" podUID="67345a72-9d66-4d9b-8d45-698aed92c23c" Jan 13 21:15:00.974403 kubelet[2454]: E0113 21:15:00.974295 2454 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:15:00.991775 kubelet[2454]: I0113 21:15:00.991558 2454 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6d9f45bd8-2c26v" podStartSLOduration=1.2398219 podStartE2EDuration="3.991539144s" podCreationTimestamp="2025-01-13 21:14:57 +0000 UTC" firstStartedPulling="2025-01-13 21:14:57.679934407 +0000 UTC m=+12.857011989" lastFinishedPulling="2025-01-13 21:15:00.431651571 +0000 UTC m=+15.608729233" observedRunningTime="2025-01-13 21:15:00.990646815 +0000 UTC m=+16.167724397" watchObservedRunningTime="2025-01-13 21:15:00.991539144 +0000 UTC m=+16.168616726" Jan 13 21:15:01.020384 kubelet[2454]: E0113 21:15:01.020259 2454 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:15:01.020384 kubelet[2454]: W0113 21:15:01.020286 2454 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:15:01.020384 kubelet[2454]: E0113 21:15:01.020307 2454 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:15:01.020616 kubelet[2454]: E0113 21:15:01.020604 2454 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:15:01.020738 kubelet[2454]: W0113 21:15:01.020667 2454 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:15:01.020738 kubelet[2454]: E0113 21:15:01.020686 2454 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:15:01.021000 kubelet[2454]: E0113 21:15:01.020986 2454 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:15:01.021113 kubelet[2454]: W0113 21:15:01.021057 2454 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:15:01.021113 kubelet[2454]: E0113 21:15:01.021073 2454 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 21:15:01.021488 kubelet[2454]: E0113 21:15:01.021415 2454 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:15:01.021488 kubelet[2454]: W0113 21:15:01.021428 2454 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:15:01.021488 kubelet[2454]: E0113 21:15:01.021438 2454 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:15:01.021831 kubelet[2454]: E0113 21:15:01.021773 2454 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:15:01.021831 kubelet[2454]: W0113 21:15:01.021785 2454 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:15:01.021831 kubelet[2454]: E0113 21:15:01.021795 2454 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:15:01.022194 kubelet[2454]: E0113 21:15:01.022142 2454 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:15:01.022194 kubelet[2454]: W0113 21:15:01.022155 2454 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:15:01.022194 kubelet[2454]: E0113 21:15:01.022166 2454 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:15:01.022739 kubelet[2454]: E0113 21:15:01.022722 2454 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:15:01.022960 kubelet[2454]: W0113 21:15:01.022775 2454 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:15:01.022960 kubelet[2454]: E0113 21:15:01.022795 2454 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:15:01.023530 kubelet[2454]: E0113 21:15:01.023515 2454 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:15:01.023688 kubelet[2454]: W0113 21:15:01.023588 2454 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:15:01.023688 kubelet[2454]: E0113 21:15:01.023603 2454 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 21:15:01.024046 kubelet[2454]: E0113 21:15:01.023966 2454 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:15:01.024170 kubelet[2454]: W0113 21:15:01.024112 2454 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:15:01.024170 kubelet[2454]: E0113 21:15:01.024130 2454 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:15:01.024534 kubelet[2454]: E0113 21:15:01.024470 2454 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:15:01.024534 kubelet[2454]: W0113 21:15:01.024486 2454 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:15:01.024534 kubelet[2454]: E0113 21:15:01.024497 2454 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:15:01.024915 kubelet[2454]: E0113 21:15:01.024833 2454 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:15:01.024915 kubelet[2454]: W0113 21:15:01.024856 2454 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:15:01.024915 kubelet[2454]: E0113 21:15:01.024867 2454 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:15:01.025252 kubelet[2454]: E0113 21:15:01.025239 2454 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:15:01.025424 kubelet[2454]: W0113 21:15:01.025291 2454 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:15:01.025424 kubelet[2454]: E0113 21:15:01.025304 2454 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:15:01.026357 kubelet[2454]: E0113 21:15:01.025644 2454 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:15:01.026357 kubelet[2454]: W0113 21:15:01.025657 2454 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:15:01.026357 kubelet[2454]: E0113 21:15:01.025667 2454 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 21:15:01.026788 kubelet[2454]: E0113 21:15:01.026674 2454 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:15:01.026788 kubelet[2454]: W0113 21:15:01.026687 2454 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:15:01.026788 kubelet[2454]: E0113 21:15:01.026707 2454 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:15:01.026960 kubelet[2454]: E0113 21:15:01.026946 2454 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:15:01.027018 kubelet[2454]: W0113 21:15:01.027007 2454 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:15:01.027075 kubelet[2454]: E0113 21:15:01.027064 2454 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:15:01.027395 kubelet[2454]: E0113 21:15:01.027380 2454 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:15:01.027491 kubelet[2454]: W0113 21:15:01.027459 2454 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:15:01.027491 kubelet[2454]: E0113 21:15:01.027475 2454 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:15:01.027925 kubelet[2454]: E0113 21:15:01.027785 2454 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:15:01.027925 kubelet[2454]: W0113 21:15:01.027799 2454 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:15:01.027925 kubelet[2454]: E0113 21:15:01.027813 2454 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:15:01.028193 kubelet[2454]: E0113 21:15:01.028163 2454 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:15:01.028193 kubelet[2454]: W0113 21:15:01.028178 2454 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:15:01.028605 kubelet[2454]: E0113 21:15:01.028446 2454 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 21:15:01.028854 kubelet[2454]: E0113 21:15:01.028735 2454 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:15:01.028854 kubelet[2454]: W0113 21:15:01.028748 2454 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:15:01.028854 kubelet[2454]: E0113 21:15:01.028762 2454 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:15:01.029029 kubelet[2454]: E0113 21:15:01.029015 2454 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:15:01.029091 kubelet[2454]: W0113 21:15:01.029074 2454 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:15:01.029192 kubelet[2454]: E0113 21:15:01.029136 2454 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:15:01.029411 kubelet[2454]: E0113 21:15:01.029360 2454 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:15:01.029411 kubelet[2454]: W0113 21:15:01.029371 2454 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:15:01.029483 kubelet[2454]: E0113 21:15:01.029411 2454 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:15:01.029750 kubelet[2454]: E0113 21:15:01.029679 2454 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:15:01.029750 kubelet[2454]: W0113 21:15:01.029690 2454 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:15:01.030074 kubelet[2454]: E0113 21:15:01.029741 2454 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:15:01.030074 kubelet[2454]: E0113 21:15:01.029950 2454 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:15:01.030074 kubelet[2454]: W0113 21:15:01.029965 2454 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:15:01.030074 kubelet[2454]: E0113 21:15:01.029995 2454 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 21:15:01.030519 kubelet[2454]: E0113 21:15:01.030315 2454 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:15:01.030519 kubelet[2454]: W0113 21:15:01.030327 2454 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:15:01.030519 kubelet[2454]: E0113 21:15:01.030347 2454 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:15:01.030612 kubelet[2454]: E0113 21:15:01.030599 2454 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:15:01.030646 kubelet[2454]: W0113 21:15:01.030611 2454 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:15:01.030646 kubelet[2454]: E0113 21:15:01.030629 2454 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:15:01.030828 kubelet[2454]: E0113 21:15:01.030815 2454 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:15:01.030828 kubelet[2454]: W0113 21:15:01.030825 2454 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:15:01.030895 kubelet[2454]: E0113 21:15:01.030839 2454 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:15:01.031039 kubelet[2454]: E0113 21:15:01.031027 2454 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:15:01.031064 kubelet[2454]: W0113 21:15:01.031038 2454 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:15:01.031064 kubelet[2454]: E0113 21:15:01.031051 2454 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:15:01.031295 kubelet[2454]: E0113 21:15:01.031278 2454 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:15:01.031320 kubelet[2454]: W0113 21:15:01.031296 2454 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:15:01.031320 kubelet[2454]: E0113 21:15:01.031312 2454 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 21:15:01.031502 kubelet[2454]: E0113 21:15:01.031489 2454 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:15:01.031524 kubelet[2454]: W0113 21:15:01.031502 2454 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:15:01.031524 kubelet[2454]: E0113 21:15:01.031516 2454 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:15:01.031688 kubelet[2454]: E0113 21:15:01.031678 2454 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:15:01.031727 kubelet[2454]: W0113 21:15:01.031688 2454 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:15:01.031727 kubelet[2454]: E0113 21:15:01.031710 2454 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:15:01.031940 kubelet[2454]: E0113 21:15:01.031927 2454 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:15:01.031940 kubelet[2454]: W0113 21:15:01.031937 2454 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:15:01.032084 kubelet[2454]: E0113 21:15:01.031946 2454 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:15:01.032143 kubelet[2454]: E0113 21:15:01.032128 2454 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:15:01.032143 kubelet[2454]: W0113 21:15:01.032138 2454 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:15:01.032195 kubelet[2454]: E0113 21:15:01.032146 2454 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:15:01.032619 kubelet[2454]: E0113 21:15:01.032588 2454 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:15:01.032619 kubelet[2454]: W0113 21:15:01.032603 2454 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:15:01.032619 kubelet[2454]: E0113 21:15:01.032613 2454 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 21:15:01.905683 containerd[1434]: time="2025-01-13T21:15:01.905622434Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:15:01.910965 containerd[1434]: time="2025-01-13T21:15:01.906717645Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=5117811" Jan 13 21:15:01.910965 containerd[1434]: time="2025-01-13T21:15:01.907794896Z" level=info msg="ImageCreate event name:\"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:15:01.910965 containerd[1434]: time="2025-01-13T21:15:01.910816526Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:15:01.911530 containerd[1434]: time="2025-01-13T21:15:01.911476173Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6487425\" in 1.479062194s" Jan 13 21:15:01.911530 containerd[1434]: time="2025-01-13T21:15:01.911515173Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\"" Jan 13 21:15:01.914215 containerd[1434]: time="2025-01-13T21:15:01.913867757Z" level=info msg="CreateContainer within sandbox \"1d6466a1ec1870df86c367e6dd9451406a3059472aa981e6cd8e41c5a2c360cb\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 13 21:15:01.930012 containerd[1434]: time="2025-01-13T21:15:01.929918718Z" level=info msg="CreateContainer within sandbox \"1d6466a1ec1870df86c367e6dd9451406a3059472aa981e6cd8e41c5a2c360cb\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"20c6760f3b65421adde596d46beec78be27ebef000e6327b368ffc4e0244deff\"" Jan 13 21:15:01.931541 containerd[1434]: time="2025-01-13T21:15:01.930500764Z" level=info msg="StartContainer for \"20c6760f3b65421adde596d46beec78be27ebef000e6327b368ffc4e0244deff\"" Jan 13 21:15:01.961888 systemd[1]: Started cri-containerd-20c6760f3b65421adde596d46beec78be27ebef000e6327b368ffc4e0244deff.scope - libcontainer container 20c6760f3b65421adde596d46beec78be27ebef000e6327b368ffc4e0244deff. 
Jan 13 21:15:01.976564 kubelet[2454]: E0113 21:15:01.976518 2454 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:15:01.991661 containerd[1434]: time="2025-01-13T21:15:01.991613060Z" level=info msg="StartContainer for \"20c6760f3b65421adde596d46beec78be27ebef000e6327b368ffc4e0244deff\" returns successfully" Jan 13 21:15:02.036568 kubelet[2454]: E0113 21:15:02.034018 2454 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:15:02.036568 kubelet[2454]: W0113 21:15:02.034041 2454 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:15:02.036568 kubelet[2454]: E0113 21:15:02.034059 2454 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:15:02.036568 kubelet[2454]: E0113 21:15:02.034284 2454 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:15:02.036568 kubelet[2454]: W0113 21:15:02.034293 2454 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:15:02.036568 kubelet[2454]: E0113 21:15:02.034303 2454 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:15:02.036568 kubelet[2454]: E0113 21:15:02.034484 2454 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:15:02.036568 kubelet[2454]: W0113 21:15:02.034492 2454 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:15:02.036568 kubelet[2454]: E0113 21:15:02.034500 2454 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:15:02.036568 kubelet[2454]: E0113 21:15:02.034651 2454 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:15:02.036920 kubelet[2454]: W0113 21:15:02.034661 2454 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:15:02.036920 kubelet[2454]: E0113 21:15:02.034668 2454 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 21:15:02.036920 kubelet[2454]: E0113 21:15:02.034891 2454 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:15:02.036920 kubelet[2454]: W0113 21:15:02.034901 2454 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:15:02.036920 kubelet[2454]: E0113 21:15:02.034909 2454 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:15:02.036920 kubelet[2454]: E0113 21:15:02.035065 2454 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:15:02.036920 kubelet[2454]: W0113 21:15:02.035072 2454 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:15:02.036920 kubelet[2454]: E0113 21:15:02.035080 2454 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:15:02.036920 kubelet[2454]: E0113 21:15:02.035298 2454 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:15:02.036920 kubelet[2454]: W0113 21:15:02.035308 2454 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:15:02.037148 kubelet[2454]: E0113 21:15:02.035317 2454 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:15:02.037148 kubelet[2454]: E0113 21:15:02.035480 2454 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:15:02.037148 kubelet[2454]: W0113 21:15:02.035488 2454 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:15:02.037148 kubelet[2454]: E0113 21:15:02.035497 2454 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:15:02.037148 kubelet[2454]: E0113 21:15:02.035665 2454 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:15:02.037148 kubelet[2454]: W0113 21:15:02.035675 2454 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:15:02.037148 kubelet[2454]: E0113 21:15:02.035684 2454 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 21:15:02.037148 kubelet[2454]: E0113 21:15:02.035828 2454 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:15:02.037148 kubelet[2454]: W0113 21:15:02.035848 2454 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:15:02.037148 kubelet[2454]: E0113 21:15:02.035856 2454 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:15:02.037357 kubelet[2454]: E0113 21:15:02.036001 2454 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:15:02.037357 kubelet[2454]: W0113 21:15:02.036008 2454 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:15:02.037357 kubelet[2454]: E0113 21:15:02.036016 2454 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:15:02.037357 kubelet[2454]: E0113 21:15:02.036145 2454 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:15:02.037357 kubelet[2454]: W0113 21:15:02.036210 2454 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:15:02.037357 kubelet[2454]: E0113 21:15:02.036220 2454 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:15:02.037357 kubelet[2454]: E0113 21:15:02.036390 2454 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:15:02.037357 kubelet[2454]: W0113 21:15:02.036397 2454 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:15:02.037357 kubelet[2454]: E0113 21:15:02.036405 2454 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:15:02.037357 kubelet[2454]: E0113 21:15:02.036552 2454 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:15:02.037560 kubelet[2454]: W0113 21:15:02.036559 2454 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:15:02.037560 kubelet[2454]: E0113 21:15:02.036566 2454 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 21:15:02.037560 kubelet[2454]: E0113 21:15:02.036693 2454 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:15:02.037560 kubelet[2454]: W0113 21:15:02.036729 2454 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:15:02.037560 kubelet[2454]: E0113 21:15:02.036736 2454 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:15:02.037560 kubelet[2454]: E0113 21:15:02.036940 2454 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:15:02.037560 kubelet[2454]: W0113 21:15:02.036949 2454 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:15:02.037560 kubelet[2454]: E0113 21:15:02.036957 2454 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:15:02.037560 kubelet[2454]: E0113 21:15:02.037152 2454 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:15:02.037560 kubelet[2454]: W0113 21:15:02.037160 2454 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:15:02.037797 kubelet[2454]: E0113 21:15:02.037169 2454 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:15:02.037797 kubelet[2454]: E0113 21:15:02.037371 2454 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:15:02.037797 kubelet[2454]: W0113 21:15:02.037387 2454 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:15:02.037797 kubelet[2454]: E0113 21:15:02.037397 2454 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:15:02.037797 kubelet[2454]: E0113 21:15:02.037568 2454 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:15:02.037797 kubelet[2454]: W0113 21:15:02.037591 2454 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:15:02.037797 kubelet[2454]: E0113 21:15:02.037601 2454 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 21:15:02.037797 kubelet[2454]: E0113 21:15:02.037773 2454 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:15:02.037797 kubelet[2454]: W0113 21:15:02.037791 2454 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:15:02.037797 kubelet[2454]: E0113 21:15:02.037801 2454 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:15:02.038037 kubelet[2454]: E0113 21:15:02.037968 2454 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:15:02.038037 kubelet[2454]: W0113 21:15:02.037977 2454 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:15:02.038037 kubelet[2454]: E0113 21:15:02.037987 2454 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:15:02.038195 kubelet[2454]: E0113 21:15:02.038162 2454 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:15:02.038195 kubelet[2454]: W0113 21:15:02.038176 2454 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:15:02.038260 kubelet[2454]: E0113 21:15:02.038231 2454 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:15:02.038469 kubelet[2454]: E0113 21:15:02.038454 2454 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:15:02.038504 kubelet[2454]: W0113 21:15:02.038469 2454 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:15:02.038504 kubelet[2454]: E0113 21:15:02.038494 2454 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:15:02.038661 kubelet[2454]: E0113 21:15:02.038648 2454 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:15:02.038661 kubelet[2454]: W0113 21:15:02.038656 2454 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:15:02.038661 kubelet[2454]: E0113 21:15:02.038669 2454 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 21:15:02.039232 kubelet[2454]: E0113 21:15:02.039037 2454 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:15:02.039232 kubelet[2454]: W0113 21:15:02.039052 2454 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:15:02.039232 kubelet[2454]: E0113 21:15:02.039073 2454 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:15:02.039409 kubelet[2454]: E0113 21:15:02.039397 2454 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:15:02.039563 kubelet[2454]: W0113 21:15:02.039449 2454 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:15:02.039563 kubelet[2454]: E0113 21:15:02.039517 2454 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:15:02.039908 kubelet[2454]: E0113 21:15:02.039811 2454 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:15:02.039908 kubelet[2454]: W0113 21:15:02.039824 2454 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:15:02.039908 kubelet[2454]: E0113 21:15:02.039849 2454 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:15:02.040230 kubelet[2454]: E0113 21:15:02.040216 2454 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:15:02.040334 kubelet[2454]: W0113 21:15:02.040280 2454 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:15:02.040334 kubelet[2454]: E0113 21:15:02.040312 2454 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:15:02.040590 kubelet[2454]: E0113 21:15:02.040532 2454 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:15:02.040590 kubelet[2454]: W0113 21:15:02.040544 2454 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:15:02.040590 kubelet[2454]: E0113 21:15:02.040554 2454 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 21:15:02.040950 kubelet[2454]: E0113 21:15:02.040871 2454 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:15:02.040950 kubelet[2454]: W0113 21:15:02.040883 2454 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:15:02.040950 kubelet[2454]: E0113 21:15:02.040898 2454 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:15:02.041069 kubelet[2454]: E0113 21:15:02.041053 2454 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:15:02.041069 kubelet[2454]: W0113 21:15:02.041067 2454 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:15:02.041144 kubelet[2454]: E0113 21:15:02.041085 2454 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:15:02.041242 kubelet[2454]: E0113 21:15:02.041226 2454 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:15:02.041242 kubelet[2454]: W0113 21:15:02.041234 2454 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:15:02.041242 kubelet[2454]: E0113 21:15:02.041242 2454 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:15:02.041527 kubelet[2454]: E0113 21:15:02.041514 2454 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:15:02.041527 kubelet[2454]: W0113 21:15:02.041526 2454 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:15:02.041587 kubelet[2454]: E0113 21:15:02.041535 2454 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:15:02.056072 systemd[1]: cri-containerd-20c6760f3b65421adde596d46beec78be27ebef000e6327b368ffc4e0244deff.scope: Deactivated successfully. Jan 13 21:15:02.088045 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-20c6760f3b65421adde596d46beec78be27ebef000e6327b368ffc4e0244deff-rootfs.mount: Deactivated successfully. 
Jan 13 21:15:02.099087 containerd[1434]: time="2025-01-13T21:15:02.099022159Z" level=info msg="shim disconnected" id=20c6760f3b65421adde596d46beec78be27ebef000e6327b368ffc4e0244deff namespace=k8s.io Jan 13 21:15:02.099087 containerd[1434]: time="2025-01-13T21:15:02.099083520Z" level=warning msg="cleaning up after shim disconnected" id=20c6760f3b65421adde596d46beec78be27ebef000e6327b368ffc4e0244deff namespace=k8s.io Jan 13 21:15:02.099087 containerd[1434]: time="2025-01-13T21:15:02.099093320Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 21:15:02.919257 kubelet[2454]: E0113 21:15:02.919196 2454 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-st9qh" podUID="67345a72-9d66-4d9b-8d45-698aed92c23c" Jan 13 21:15:02.979898 kubelet[2454]: E0113 21:15:02.979575 2454 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:15:02.979898 kubelet[2454]: E0113 21:15:02.979643 2454 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:15:02.981393 containerd[1434]: time="2025-01-13T21:15:02.981360648Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Jan 13 21:15:04.920733 kubelet[2454]: E0113 21:15:04.917953 2454 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-st9qh" podUID="67345a72-9d66-4d9b-8d45-698aed92c23c" Jan 13 21:15:06.918450 kubelet[2454]: E0113 21:15:06.917997 2454 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-st9qh" podUID="67345a72-9d66-4d9b-8d45-698aed92c23c" Jan 13 21:15:07.473320 containerd[1434]: time="2025-01-13T21:15:07.473270636Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:15:07.473759 containerd[1434]: time="2025-01-13T21:15:07.473720959Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=89703123" Jan 13 21:15:07.474504 containerd[1434]: time="2025-01-13T21:15:07.474482084Z" level=info msg="ImageCreate event name:\"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:15:07.476383 containerd[1434]: time="2025-01-13T21:15:07.476333497Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:15:07.477348 containerd[1434]: time="2025-01-13T21:15:07.477160262Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo 
digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"91072777\" in 4.495764854s" Jan 13 21:15:07.477348 containerd[1434]: time="2025-01-13T21:15:07.477190423Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\"" Jan 13 21:15:07.480264 containerd[1434]: time="2025-01-13T21:15:07.480239283Z" level=info msg="CreateContainer within sandbox \"1d6466a1ec1870df86c367e6dd9451406a3059472aa981e6cd8e41c5a2c360cb\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 13 21:15:07.492026 containerd[1434]: time="2025-01-13T21:15:07.491959204Z" level=info msg="CreateContainer within sandbox \"1d6466a1ec1870df86c367e6dd9451406a3059472aa981e6cd8e41c5a2c360cb\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"510f1762bb9a987887126b3ab7b3f7b2fbc38c7b75fef654e7298fb4cee46277\"" Jan 13 21:15:07.492464 containerd[1434]: time="2025-01-13T21:15:07.492422767Z" level=info msg="StartContainer for \"510f1762bb9a987887126b3ab7b3f7b2fbc38c7b75fef654e7298fb4cee46277\"" Jan 13 21:15:07.523883 systemd[1]: Started cri-containerd-510f1762bb9a987887126b3ab7b3f7b2fbc38c7b75fef654e7298fb4cee46277.scope - libcontainer container 510f1762bb9a987887126b3ab7b3f7b2fbc38c7b75fef654e7298fb4cee46277. Jan 13 21:15:07.545391 containerd[1434]: time="2025-01-13T21:15:07.545347689Z" level=info msg="StartContainer for \"510f1762bb9a987887126b3ab7b3f7b2fbc38c7b75fef654e7298fb4cee46277\" returns successfully" Jan 13 21:15:07.989340 kubelet[2454]: E0113 21:15:07.989300 2454 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:15:08.056668 containerd[1434]: time="2025-01-13T21:15:08.056596080Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 13 21:15:08.058866 systemd[1]: cri-containerd-510f1762bb9a987887126b3ab7b3f7b2fbc38c7b75fef654e7298fb4cee46277.scope: Deactivated successfully. Jan 13 21:15:08.076565 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-510f1762bb9a987887126b3ab7b3f7b2fbc38c7b75fef654e7298fb4cee46277-rootfs.mount: Deactivated successfully. Jan 13 21:15:08.086203 kubelet[2454]: I0113 21:15:08.086169 2454 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Jan 13 21:15:08.134165 systemd[1]: Created slice kubepods-burstable-pod6546b16e_ed32_4e3e_8156_6c685dd971ab.slice - libcontainer container kubepods-burstable-pod6546b16e_ed32_4e3e_8156_6c685dd971ab.slice. Jan 13 21:15:08.148510 systemd[1]: Created slice kubepods-burstable-pod6deed8be_443c_4c20_8288_97ef2040b5e5.slice - libcontainer container kubepods-burstable-pod6deed8be_443c_4c20_8288_97ef2040b5e5.slice. Jan 13 21:15:08.170862 systemd[1]: Created slice kubepods-besteffort-podfc42e8bc_befe_48fd_a086_67842b49de77.slice - libcontainer container kubepods-besteffort-podfc42e8bc_befe_48fd_a086_67842b49de77.slice. Jan 13 21:15:08.177897 systemd[1]: Created slice kubepods-besteffort-podeba042e6_b417_4f58_b615_4e861e1468fa.slice - libcontainer container kubepods-besteffort-podeba042e6_b417_4f58_b615_4e861e1468fa.slice. 
Jan 13 21:15:08.184026 systemd[1]: Created slice kubepods-besteffort-pod48748cb0_5484_4170_8079_80e03c4d2ae3.slice - libcontainer container kubepods-besteffort-pod48748cb0_5484_4170_8079_80e03c4d2ae3.slice. Jan 13 21:15:08.224365 containerd[1434]: time="2025-01-13T21:15:08.224139713Z" level=info msg="shim disconnected" id=510f1762bb9a987887126b3ab7b3f7b2fbc38c7b75fef654e7298fb4cee46277 namespace=k8s.io Jan 13 21:15:08.224365 containerd[1434]: time="2025-01-13T21:15:08.224205634Z" level=warning msg="cleaning up after shim disconnected" id=510f1762bb9a987887126b3ab7b3f7b2fbc38c7b75fef654e7298fb4cee46277 namespace=k8s.io Jan 13 21:15:08.224365 containerd[1434]: time="2025-01-13T21:15:08.224214234Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 21:15:08.285593 kubelet[2454]: I0113 21:15:08.285499 2454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z96jk\" (UniqueName: \"kubernetes.io/projected/eba042e6-b417-4f58-b615-4e861e1468fa-kube-api-access-z96jk\") pod \"calico-apiserver-79d978d948-phbxp\" (UID: \"eba042e6-b417-4f58-b615-4e861e1468fa\") " pod="calico-apiserver/calico-apiserver-79d978d948-phbxp" Jan 13 21:15:08.285932 kubelet[2454]: I0113 21:15:08.285748 2454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/48748cb0-5484-4170-8079-80e03c4d2ae3-calico-apiserver-certs\") pod \"calico-apiserver-79d978d948-2zkm8\" (UID: \"48748cb0-5484-4170-8079-80e03c4d2ae3\") " pod="calico-apiserver/calico-apiserver-79d978d948-2zkm8" Jan 13 21:15:08.285932 kubelet[2454]: I0113 21:15:08.285796 2454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kpgpd\" (UniqueName: \"kubernetes.io/projected/6546b16e-ed32-4e3e-8156-6c685dd971ab-kube-api-access-kpgpd\") pod \"coredns-6f6b679f8f-pcklw\" (UID: \"6546b16e-ed32-4e3e-8156-6c685dd971ab\") " pod="kube-system/coredns-6f6b679f8f-pcklw" Jan 13 21:15:08.285932 kubelet[2454]: I0113 21:15:08.285815 2454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fc42e8bc-befe-48fd-a086-67842b49de77-tigera-ca-bundle\") pod \"calico-kube-controllers-649ff95fb-x7s9h\" (UID: \"fc42e8bc-befe-48fd-a086-67842b49de77\") " pod="calico-system/calico-kube-controllers-649ff95fb-x7s9h" Jan 13 21:15:08.286932 kubelet[2454]: I0113 21:15:08.286798 2454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6deed8be-443c-4c20-8288-97ef2040b5e5-config-volume\") pod \"coredns-6f6b679f8f-t7c2j\" (UID: \"6deed8be-443c-4c20-8288-97ef2040b5e5\") " pod="kube-system/coredns-6f6b679f8f-t7c2j" Jan 13 21:15:08.286932 kubelet[2454]: I0113 21:15:08.286828 2454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w42dk\" (UniqueName: \"kubernetes.io/projected/6deed8be-443c-4c20-8288-97ef2040b5e5-kube-api-access-w42dk\") pod \"coredns-6f6b679f8f-t7c2j\" (UID: \"6deed8be-443c-4c20-8288-97ef2040b5e5\") " pod="kube-system/coredns-6f6b679f8f-t7c2j" Jan 13 21:15:08.286932 kubelet[2454]: I0113 21:15:08.286881 2454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d6brl\" (UniqueName: 
\"kubernetes.io/projected/fc42e8bc-befe-48fd-a086-67842b49de77-kube-api-access-d6brl\") pod \"calico-kube-controllers-649ff95fb-x7s9h\" (UID: \"fc42e8bc-befe-48fd-a086-67842b49de77\") " pod="calico-system/calico-kube-controllers-649ff95fb-x7s9h" Jan 13 21:15:08.286932 kubelet[2454]: I0113 21:15:08.286905 2454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/eba042e6-b417-4f58-b615-4e861e1468fa-calico-apiserver-certs\") pod \"calico-apiserver-79d978d948-phbxp\" (UID: \"eba042e6-b417-4f58-b615-4e861e1468fa\") " pod="calico-apiserver/calico-apiserver-79d978d948-phbxp" Jan 13 21:15:08.287223 kubelet[2454]: I0113 21:15:08.287129 2454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6546b16e-ed32-4e3e-8156-6c685dd971ab-config-volume\") pod \"coredns-6f6b679f8f-pcklw\" (UID: \"6546b16e-ed32-4e3e-8156-6c685dd971ab\") " pod="kube-system/coredns-6f6b679f8f-pcklw" Jan 13 21:15:08.287223 kubelet[2454]: I0113 21:15:08.287156 2454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-79s5g\" (UniqueName: \"kubernetes.io/projected/48748cb0-5484-4170-8079-80e03c4d2ae3-kube-api-access-79s5g\") pod \"calico-apiserver-79d978d948-2zkm8\" (UID: \"48748cb0-5484-4170-8079-80e03c4d2ae3\") " pod="calico-apiserver/calico-apiserver-79d978d948-2zkm8" Jan 13 21:15:08.436814 kubelet[2454]: E0113 21:15:08.436523 2454 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:15:08.437868 containerd[1434]: time="2025-01-13T21:15:08.437823963Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-pcklw,Uid:6546b16e-ed32-4e3e-8156-6c685dd971ab,Namespace:kube-system,Attempt:0,}" Jan 13 21:15:08.451135 kubelet[2454]: E0113 21:15:08.451090 2454 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:15:08.452137 containerd[1434]: time="2025-01-13T21:15:08.452044694Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-t7c2j,Uid:6deed8be-443c-4c20-8288-97ef2040b5e5,Namespace:kube-system,Attempt:0,}" Jan 13 21:15:08.480743 containerd[1434]: time="2025-01-13T21:15:08.480447036Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-649ff95fb-x7s9h,Uid:fc42e8bc-befe-48fd-a086-67842b49de77,Namespace:calico-system,Attempt:0,}" Jan 13 21:15:08.492562 containerd[1434]: time="2025-01-13T21:15:08.489895216Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79d978d948-2zkm8,Uid:48748cb0-5484-4170-8079-80e03c4d2ae3,Namespace:calico-apiserver,Attempt:0,}" Jan 13 21:15:08.492562 containerd[1434]: time="2025-01-13T21:15:08.490064617Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79d978d948-phbxp,Uid:eba042e6-b417-4f58-b615-4e861e1468fa,Namespace:calico-apiserver,Attempt:0,}" Jan 13 21:15:08.786167 containerd[1434]: time="2025-01-13T21:15:08.785870673Z" level=error msg="Failed to destroy network for sandbox \"49ceb3ec3f78ca93d3db302af1463b4bfd9fc3863f2aaed382d90d4f7b446ad1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:15:08.786338 containerd[1434]: time="2025-01-13T21:15:08.786263276Z" level=error msg="encountered an error cleaning up failed sandbox \"49ceb3ec3f78ca93d3db302af1463b4bfd9fc3863f2aaed382d90d4f7b446ad1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:15:08.786338 containerd[1434]: time="2025-01-13T21:15:08.786314436Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-pcklw,Uid:6546b16e-ed32-4e3e-8156-6c685dd971ab,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"49ceb3ec3f78ca93d3db302af1463b4bfd9fc3863f2aaed382d90d4f7b446ad1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:15:08.786548 containerd[1434]: time="2025-01-13T21:15:08.786490917Z" level=error msg="Failed to destroy network for sandbox \"3fc229edecaa0e6fb512b5968a69dbac36c89051849e486433976bef55302188\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:15:08.786791 containerd[1434]: time="2025-01-13T21:15:08.786766079Z" level=error msg="encountered an error cleaning up failed sandbox \"3fc229edecaa0e6fb512b5968a69dbac36c89051849e486433976bef55302188\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:15:08.786835 containerd[1434]: time="2025-01-13T21:15:08.786809599Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79d978d948-2zkm8,Uid:48748cb0-5484-4170-8079-80e03c4d2ae3,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3fc229edecaa0e6fb512b5968a69dbac36c89051849e486433976bef55302188\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:15:08.787732 containerd[1434]: time="2025-01-13T21:15:08.787687965Z" level=error msg="Failed to destroy network for sandbox \"a3da4175aa673e0ee67bd76802f5b4d5b22e54b712944b48d594d5bdcc4f9156\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:15:08.788384 containerd[1434]: time="2025-01-13T21:15:08.787968047Z" level=error msg="encountered an error cleaning up failed sandbox \"a3da4175aa673e0ee67bd76802f5b4d5b22e54b712944b48d594d5bdcc4f9156\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:15:08.788384 containerd[1434]: time="2025-01-13T21:15:08.788008487Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-649ff95fb-x7s9h,Uid:fc42e8bc-befe-48fd-a086-67842b49de77,Namespace:calico-system,Attempt:0,} failed, error" 
error="failed to setup network for sandbox \"a3da4175aa673e0ee67bd76802f5b4d5b22e54b712944b48d594d5bdcc4f9156\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:15:08.788483 kubelet[2454]: E0113 21:15:08.788349 2454 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"49ceb3ec3f78ca93d3db302af1463b4bfd9fc3863f2aaed382d90d4f7b446ad1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:15:08.788483 kubelet[2454]: E0113 21:15:08.788436 2454 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"49ceb3ec3f78ca93d3db302af1463b4bfd9fc3863f2aaed382d90d4f7b446ad1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-pcklw" Jan 13 21:15:08.788483 kubelet[2454]: E0113 21:15:08.788454 2454 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"49ceb3ec3f78ca93d3db302af1463b4bfd9fc3863f2aaed382d90d4f7b446ad1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-pcklw" Jan 13 21:15:08.788586 kubelet[2454]: E0113 21:15:08.788492 2454 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-pcklw_kube-system(6546b16e-ed32-4e3e-8156-6c685dd971ab)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-pcklw_kube-system(6546b16e-ed32-4e3e-8156-6c685dd971ab)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"49ceb3ec3f78ca93d3db302af1463b4bfd9fc3863f2aaed382d90d4f7b446ad1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-pcklw" podUID="6546b16e-ed32-4e3e-8156-6c685dd971ab" Jan 13 21:15:08.788741 kubelet[2454]: E0113 21:15:08.788710 2454 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a3da4175aa673e0ee67bd76802f5b4d5b22e54b712944b48d594d5bdcc4f9156\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:15:08.788889 kubelet[2454]: E0113 21:15:08.788870 2454 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a3da4175aa673e0ee67bd76802f5b4d5b22e54b712944b48d594d5bdcc4f9156\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-649ff95fb-x7s9h" Jan 13 21:15:08.789101 kubelet[2454]: E0113 21:15:08.788958 2454 kuberuntime_manager.go:1168] 
"CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a3da4175aa673e0ee67bd76802f5b4d5b22e54b712944b48d594d5bdcc4f9156\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-649ff95fb-x7s9h" Jan 13 21:15:08.789101 kubelet[2454]: E0113 21:15:08.788995 2454 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-649ff95fb-x7s9h_calico-system(fc42e8bc-befe-48fd-a086-67842b49de77)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-649ff95fb-x7s9h_calico-system(fc42e8bc-befe-48fd-a086-67842b49de77)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a3da4175aa673e0ee67bd76802f5b4d5b22e54b712944b48d594d5bdcc4f9156\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-649ff95fb-x7s9h" podUID="fc42e8bc-befe-48fd-a086-67842b49de77" Jan 13 21:15:08.789101 kubelet[2454]: E0113 21:15:08.788345 2454 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3fc229edecaa0e6fb512b5968a69dbac36c89051849e486433976bef55302188\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:15:08.789219 kubelet[2454]: E0113 21:15:08.789036 2454 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3fc229edecaa0e6fb512b5968a69dbac36c89051849e486433976bef55302188\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-79d978d948-2zkm8" Jan 13 21:15:08.789219 kubelet[2454]: E0113 21:15:08.789050 2454 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3fc229edecaa0e6fb512b5968a69dbac36c89051849e486433976bef55302188\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-79d978d948-2zkm8" Jan 13 21:15:08.789219 kubelet[2454]: E0113 21:15:08.789072 2454 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-79d978d948-2zkm8_calico-apiserver(48748cb0-5484-4170-8079-80e03c4d2ae3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-79d978d948-2zkm8_calico-apiserver(48748cb0-5484-4170-8079-80e03c4d2ae3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3fc229edecaa0e6fb512b5968a69dbac36c89051849e486433976bef55302188\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-79d978d948-2zkm8" podUID="48748cb0-5484-4170-8079-80e03c4d2ae3" 
Jan 13 21:15:08.793827 containerd[1434]: time="2025-01-13T21:15:08.793794444Z" level=error msg="Failed to destroy network for sandbox \"57ecb9b8ccdf0ab6b04474895c30e66b2c063daae7f98e21907bf3549d822357\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:15:08.794106 containerd[1434]: time="2025-01-13T21:15:08.794078126Z" level=error msg="encountered an error cleaning up failed sandbox \"57ecb9b8ccdf0ab6b04474895c30e66b2c063daae7f98e21907bf3549d822357\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:15:08.794144 containerd[1434]: time="2025-01-13T21:15:08.794121086Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-t7c2j,Uid:6deed8be-443c-4c20-8288-97ef2040b5e5,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"57ecb9b8ccdf0ab6b04474895c30e66b2c063daae7f98e21907bf3549d822357\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:15:08.794729 kubelet[2454]: E0113 21:15:08.794292 2454 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"57ecb9b8ccdf0ab6b04474895c30e66b2c063daae7f98e21907bf3549d822357\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:15:08.794729 kubelet[2454]: E0113 21:15:08.794341 2454 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"57ecb9b8ccdf0ab6b04474895c30e66b2c063daae7f98e21907bf3549d822357\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-t7c2j" Jan 13 21:15:08.794729 kubelet[2454]: E0113 21:15:08.794356 2454 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"57ecb9b8ccdf0ab6b04474895c30e66b2c063daae7f98e21907bf3549d822357\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-t7c2j" Jan 13 21:15:08.794872 kubelet[2454]: E0113 21:15:08.794384 2454 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-t7c2j_kube-system(6deed8be-443c-4c20-8288-97ef2040b5e5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-t7c2j_kube-system(6deed8be-443c-4c20-8288-97ef2040b5e5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"57ecb9b8ccdf0ab6b04474895c30e66b2c063daae7f98e21907bf3549d822357\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-t7c2j" 
podUID="6deed8be-443c-4c20-8288-97ef2040b5e5" Jan 13 21:15:08.797111 containerd[1434]: time="2025-01-13T21:15:08.797067505Z" level=error msg="Failed to destroy network for sandbox \"5aecf0c1274be05dd42d4534f4059ab89d0b47f22e24fba1edca2b3138e3c9de\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:15:08.797367 containerd[1434]: time="2025-01-13T21:15:08.797330027Z" level=error msg="encountered an error cleaning up failed sandbox \"5aecf0c1274be05dd42d4534f4059ab89d0b47f22e24fba1edca2b3138e3c9de\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:15:08.797453 containerd[1434]: time="2025-01-13T21:15:08.797372827Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79d978d948-phbxp,Uid:eba042e6-b417-4f58-b615-4e861e1468fa,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5aecf0c1274be05dd42d4534f4059ab89d0b47f22e24fba1edca2b3138e3c9de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:15:08.797974 kubelet[2454]: E0113 21:15:08.797937 2454 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5aecf0c1274be05dd42d4534f4059ab89d0b47f22e24fba1edca2b3138e3c9de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:15:08.798042 kubelet[2454]: E0113 21:15:08.797975 2454 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5aecf0c1274be05dd42d4534f4059ab89d0b47f22e24fba1edca2b3138e3c9de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-79d978d948-phbxp" Jan 13 21:15:08.798042 kubelet[2454]: E0113 21:15:08.797991 2454 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5aecf0c1274be05dd42d4534f4059ab89d0b47f22e24fba1edca2b3138e3c9de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-79d978d948-phbxp" Jan 13 21:15:08.798042 kubelet[2454]: E0113 21:15:08.798020 2454 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-79d978d948-phbxp_calico-apiserver(eba042e6-b417-4f58-b615-4e861e1468fa)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-79d978d948-phbxp_calico-apiserver(eba042e6-b417-4f58-b615-4e861e1468fa)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5aecf0c1274be05dd42d4534f4059ab89d0b47f22e24fba1edca2b3138e3c9de\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that 
the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-79d978d948-phbxp" podUID="eba042e6-b417-4f58-b615-4e861e1468fa" Jan 13 21:15:08.924088 systemd[1]: Created slice kubepods-besteffort-pod67345a72_9d66_4d9b_8d45_698aed92c23c.slice - libcontainer container kubepods-besteffort-pod67345a72_9d66_4d9b_8d45_698aed92c23c.slice. Jan 13 21:15:08.926327 containerd[1434]: time="2025-01-13T21:15:08.926294813Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-st9qh,Uid:67345a72-9d66-4d9b-8d45-698aed92c23c,Namespace:calico-system,Attempt:0,}" Jan 13 21:15:08.972545 containerd[1434]: time="2025-01-13T21:15:08.972490829Z" level=error msg="Failed to destroy network for sandbox \"cccb7205aa79c7ea0deaae1d2a32d8972a8301305da0eddd0cdf6e347de11ff8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:15:08.972872 containerd[1434]: time="2025-01-13T21:15:08.972824391Z" level=error msg="encountered an error cleaning up failed sandbox \"cccb7205aa79c7ea0deaae1d2a32d8972a8301305da0eddd0cdf6e347de11ff8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:15:08.972924 containerd[1434]: time="2025-01-13T21:15:08.972893552Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-st9qh,Uid:67345a72-9d66-4d9b-8d45-698aed92c23c,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"cccb7205aa79c7ea0deaae1d2a32d8972a8301305da0eddd0cdf6e347de11ff8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:15:08.974066 kubelet[2454]: E0113 21:15:08.973080 2454 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cccb7205aa79c7ea0deaae1d2a32d8972a8301305da0eddd0cdf6e347de11ff8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:15:08.974066 kubelet[2454]: E0113 21:15:08.973135 2454 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cccb7205aa79c7ea0deaae1d2a32d8972a8301305da0eddd0cdf6e347de11ff8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-st9qh" Jan 13 21:15:08.974066 kubelet[2454]: E0113 21:15:08.973152 2454 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cccb7205aa79c7ea0deaae1d2a32d8972a8301305da0eddd0cdf6e347de11ff8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-st9qh" Jan 13 21:15:08.974195 kubelet[2454]: E0113 21:15:08.973192 2454 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"CreatePodSandbox\" for \"csi-node-driver-st9qh_calico-system(67345a72-9d66-4d9b-8d45-698aed92c23c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-st9qh_calico-system(67345a72-9d66-4d9b-8d45-698aed92c23c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cccb7205aa79c7ea0deaae1d2a32d8972a8301305da0eddd0cdf6e347de11ff8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-st9qh" podUID="67345a72-9d66-4d9b-8d45-698aed92c23c" Jan 13 21:15:08.991671 kubelet[2454]: I0113 21:15:08.991652 2454 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cccb7205aa79c7ea0deaae1d2a32d8972a8301305da0eddd0cdf6e347de11ff8" Jan 13 21:15:08.993039 containerd[1434]: time="2025-01-13T21:15:08.992961440Z" level=info msg="StopPodSandbox for \"cccb7205aa79c7ea0deaae1d2a32d8972a8301305da0eddd0cdf6e347de11ff8\"" Jan 13 21:15:08.993275 kubelet[2454]: I0113 21:15:08.993028 2454 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3fc229edecaa0e6fb512b5968a69dbac36c89051849e486433976bef55302188" Jan 13 21:15:08.993661 containerd[1434]: time="2025-01-13T21:15:08.993390123Z" level=info msg="Ensure that sandbox cccb7205aa79c7ea0deaae1d2a32d8972a8301305da0eddd0cdf6e347de11ff8 in task-service has been cleanup successfully" Jan 13 21:15:08.993661 containerd[1434]: time="2025-01-13T21:15:08.993613445Z" level=info msg="StopPodSandbox for \"3fc229edecaa0e6fb512b5968a69dbac36c89051849e486433976bef55302188\"" Jan 13 21:15:08.994241 containerd[1434]: time="2025-01-13T21:15:08.993990727Z" level=info msg="Ensure that sandbox 3fc229edecaa0e6fb512b5968a69dbac36c89051849e486433976bef55302188 in task-service has been cleanup successfully" Jan 13 21:15:08.995621 kubelet[2454]: I0113 21:15:08.995521 2454 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="57ecb9b8ccdf0ab6b04474895c30e66b2c063daae7f98e21907bf3549d822357" Jan 13 21:15:08.996978 containerd[1434]: time="2025-01-13T21:15:08.996918426Z" level=info msg="StopPodSandbox for \"57ecb9b8ccdf0ab6b04474895c30e66b2c063daae7f98e21907bf3549d822357\"" Jan 13 21:15:08.997354 containerd[1434]: time="2025-01-13T21:15:08.997227468Z" level=info msg="Ensure that sandbox 57ecb9b8ccdf0ab6b04474895c30e66b2c063daae7f98e21907bf3549d822357 in task-service has been cleanup successfully" Jan 13 21:15:08.997449 kubelet[2454]: I0113 21:15:08.997396 2454 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="49ceb3ec3f78ca93d3db302af1463b4bfd9fc3863f2aaed382d90d4f7b446ad1" Jan 13 21:15:08.998241 containerd[1434]: time="2025-01-13T21:15:08.997876792Z" level=info msg="StopPodSandbox for \"49ceb3ec3f78ca93d3db302af1463b4bfd9fc3863f2aaed382d90d4f7b446ad1\"" Jan 13 21:15:08.999533 containerd[1434]: time="2025-01-13T21:15:08.998829758Z" level=info msg="Ensure that sandbox 49ceb3ec3f78ca93d3db302af1463b4bfd9fc3863f2aaed382d90d4f7b446ad1 in task-service has been cleanup successfully" Jan 13 21:15:09.000771 kubelet[2454]: I0113 21:15:09.000475 2454 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5aecf0c1274be05dd42d4534f4059ab89d0b47f22e24fba1edca2b3138e3c9de" Jan 13 21:15:09.000927 containerd[1434]: time="2025-01-13T21:15:09.000864731Z" level=info msg="StopPodSandbox for 
\"5aecf0c1274be05dd42d4534f4059ab89d0b47f22e24fba1edca2b3138e3c9de\"" Jan 13 21:15:09.001093 containerd[1434]: time="2025-01-13T21:15:09.001071772Z" level=info msg="Ensure that sandbox 5aecf0c1274be05dd42d4534f4059ab89d0b47f22e24fba1edca2b3138e3c9de in task-service has been cleanup successfully" Jan 13 21:15:09.002766 kubelet[2454]: I0113 21:15:09.002639 2454 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a3da4175aa673e0ee67bd76802f5b4d5b22e54b712944b48d594d5bdcc4f9156" Jan 13 21:15:09.004617 containerd[1434]: time="2025-01-13T21:15:09.003969710Z" level=info msg="StopPodSandbox for \"a3da4175aa673e0ee67bd76802f5b4d5b22e54b712944b48d594d5bdcc4f9156\"" Jan 13 21:15:09.004617 containerd[1434]: time="2025-01-13T21:15:09.004170431Z" level=info msg="Ensure that sandbox a3da4175aa673e0ee67bd76802f5b4d5b22e54b712944b48d594d5bdcc4f9156 in task-service has been cleanup successfully" Jan 13 21:15:09.006973 kubelet[2454]: E0113 21:15:09.006931 2454 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:15:09.008865 containerd[1434]: time="2025-01-13T21:15:09.008824019Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Jan 13 21:15:09.045796 containerd[1434]: time="2025-01-13T21:15:09.045636320Z" level=error msg="StopPodSandbox for \"cccb7205aa79c7ea0deaae1d2a32d8972a8301305da0eddd0cdf6e347de11ff8\" failed" error="failed to destroy network for sandbox \"cccb7205aa79c7ea0deaae1d2a32d8972a8301305da0eddd0cdf6e347de11ff8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:15:09.045940 kubelet[2454]: E0113 21:15:09.045871 2454 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"cccb7205aa79c7ea0deaae1d2a32d8972a8301305da0eddd0cdf6e347de11ff8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="cccb7205aa79c7ea0deaae1d2a32d8972a8301305da0eddd0cdf6e347de11ff8" Jan 13 21:15:09.045989 kubelet[2454]: E0113 21:15:09.045934 2454 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"cccb7205aa79c7ea0deaae1d2a32d8972a8301305da0eddd0cdf6e347de11ff8"} Jan 13 21:15:09.046345 kubelet[2454]: E0113 21:15:09.045994 2454 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"67345a72-9d66-4d9b-8d45-698aed92c23c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cccb7205aa79c7ea0deaae1d2a32d8972a8301305da0eddd0cdf6e347de11ff8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 13 21:15:09.046345 kubelet[2454]: E0113 21:15:09.046016 2454 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"67345a72-9d66-4d9b-8d45-698aed92c23c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cccb7205aa79c7ea0deaae1d2a32d8972a8301305da0eddd0cdf6e347de11ff8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-st9qh" podUID="67345a72-9d66-4d9b-8d45-698aed92c23c" Jan 13 21:15:09.051060 containerd[1434]: time="2025-01-13T21:15:09.051006312Z" level=error msg="StopPodSandbox for \"3fc229edecaa0e6fb512b5968a69dbac36c89051849e486433976bef55302188\" failed" error="failed to destroy network for sandbox \"3fc229edecaa0e6fb512b5968a69dbac36c89051849e486433976bef55302188\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:15:09.051221 kubelet[2454]: E0113 21:15:09.051183 2454 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3fc229edecaa0e6fb512b5968a69dbac36c89051849e486433976bef55302188\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3fc229edecaa0e6fb512b5968a69dbac36c89051849e486433976bef55302188" Jan 13 21:15:09.051269 kubelet[2454]: E0113 21:15:09.051228 2454 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3fc229edecaa0e6fb512b5968a69dbac36c89051849e486433976bef55302188"} Jan 13 21:15:09.051269 kubelet[2454]: E0113 21:15:09.051259 2454 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"48748cb0-5484-4170-8079-80e03c4d2ae3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3fc229edecaa0e6fb512b5968a69dbac36c89051849e486433976bef55302188\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 13 21:15:09.051346 kubelet[2454]: E0113 21:15:09.051279 2454 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"48748cb0-5484-4170-8079-80e03c4d2ae3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3fc229edecaa0e6fb512b5968a69dbac36c89051849e486433976bef55302188\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-79d978d948-2zkm8" podUID="48748cb0-5484-4170-8079-80e03c4d2ae3" Jan 13 21:15:09.055371 containerd[1434]: time="2025-01-13T21:15:09.055331458Z" level=error msg="StopPodSandbox for \"49ceb3ec3f78ca93d3db302af1463b4bfd9fc3863f2aaed382d90d4f7b446ad1\" failed" error="failed to destroy network for sandbox \"49ceb3ec3f78ca93d3db302af1463b4bfd9fc3863f2aaed382d90d4f7b446ad1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:15:09.055572 kubelet[2454]: E0113 21:15:09.055540 2454 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"49ceb3ec3f78ca93d3db302af1463b4bfd9fc3863f2aaed382d90d4f7b446ad1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" podSandboxID="49ceb3ec3f78ca93d3db302af1463b4bfd9fc3863f2aaed382d90d4f7b446ad1" Jan 13 21:15:09.055633 kubelet[2454]: E0113 21:15:09.055582 2454 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"49ceb3ec3f78ca93d3db302af1463b4bfd9fc3863f2aaed382d90d4f7b446ad1"} Jan 13 21:15:09.055633 kubelet[2454]: E0113 21:15:09.055623 2454 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"6546b16e-ed32-4e3e-8156-6c685dd971ab\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"49ceb3ec3f78ca93d3db302af1463b4bfd9fc3863f2aaed382d90d4f7b446ad1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 13 21:15:09.055717 kubelet[2454]: E0113 21:15:09.055642 2454 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"6546b16e-ed32-4e3e-8156-6c685dd971ab\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"49ceb3ec3f78ca93d3db302af1463b4bfd9fc3863f2aaed382d90d4f7b446ad1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-pcklw" podUID="6546b16e-ed32-4e3e-8156-6c685dd971ab" Jan 13 21:15:09.061788 containerd[1434]: time="2025-01-13T21:15:09.061716737Z" level=error msg="StopPodSandbox for \"5aecf0c1274be05dd42d4534f4059ab89d0b47f22e24fba1edca2b3138e3c9de\" failed" error="failed to destroy network for sandbox \"5aecf0c1274be05dd42d4534f4059ab89d0b47f22e24fba1edca2b3138e3c9de\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:15:09.061984 kubelet[2454]: E0113 21:15:09.061916 2454 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5aecf0c1274be05dd42d4534f4059ab89d0b47f22e24fba1edca2b3138e3c9de\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5aecf0c1274be05dd42d4534f4059ab89d0b47f22e24fba1edca2b3138e3c9de" Jan 13 21:15:09.061984 kubelet[2454]: E0113 21:15:09.061974 2454 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"5aecf0c1274be05dd42d4534f4059ab89d0b47f22e24fba1edca2b3138e3c9de"} Jan 13 21:15:09.062561 kubelet[2454]: E0113 21:15:09.062001 2454 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"eba042e6-b417-4f58-b615-4e861e1468fa\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5aecf0c1274be05dd42d4534f4059ab89d0b47f22e24fba1edca2b3138e3c9de\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 13 21:15:09.062561 kubelet[2454]: E0113 21:15:09.062023 2454 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"eba042e6-b417-4f58-b615-4e861e1468fa\" 
with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5aecf0c1274be05dd42d4534f4059ab89d0b47f22e24fba1edca2b3138e3c9de\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-79d978d948-phbxp" podUID="eba042e6-b417-4f58-b615-4e861e1468fa" Jan 13 21:15:09.063800 containerd[1434]: time="2025-01-13T21:15:09.063763269Z" level=error msg="StopPodSandbox for \"57ecb9b8ccdf0ab6b04474895c30e66b2c063daae7f98e21907bf3549d822357\" failed" error="failed to destroy network for sandbox \"57ecb9b8ccdf0ab6b04474895c30e66b2c063daae7f98e21907bf3549d822357\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:15:09.063971 kubelet[2454]: E0113 21:15:09.063927 2454 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"57ecb9b8ccdf0ab6b04474895c30e66b2c063daae7f98e21907bf3549d822357\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="57ecb9b8ccdf0ab6b04474895c30e66b2c063daae7f98e21907bf3549d822357" Jan 13 21:15:09.064028 kubelet[2454]: E0113 21:15:09.063975 2454 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"57ecb9b8ccdf0ab6b04474895c30e66b2c063daae7f98e21907bf3549d822357"} Jan 13 21:15:09.064028 kubelet[2454]: E0113 21:15:09.064005 2454 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"6deed8be-443c-4c20-8288-97ef2040b5e5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"57ecb9b8ccdf0ab6b04474895c30e66b2c063daae7f98e21907bf3549d822357\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 13 21:15:09.064105 kubelet[2454]: E0113 21:15:09.064024 2454 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"6deed8be-443c-4c20-8288-97ef2040b5e5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"57ecb9b8ccdf0ab6b04474895c30e66b2c063daae7f98e21907bf3549d822357\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-t7c2j" podUID="6deed8be-443c-4c20-8288-97ef2040b5e5" Jan 13 21:15:09.068850 containerd[1434]: time="2025-01-13T21:15:09.068812379Z" level=error msg="StopPodSandbox for \"a3da4175aa673e0ee67bd76802f5b4d5b22e54b712944b48d594d5bdcc4f9156\" failed" error="failed to destroy network for sandbox \"a3da4175aa673e0ee67bd76802f5b4d5b22e54b712944b48d594d5bdcc4f9156\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:15:09.069018 kubelet[2454]: E0113 21:15:09.068991 2454 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = 
failed to destroy network for sandbox \"a3da4175aa673e0ee67bd76802f5b4d5b22e54b712944b48d594d5bdcc4f9156\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a3da4175aa673e0ee67bd76802f5b4d5b22e54b712944b48d594d5bdcc4f9156" Jan 13 21:15:09.069071 kubelet[2454]: E0113 21:15:09.069026 2454 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a3da4175aa673e0ee67bd76802f5b4d5b22e54b712944b48d594d5bdcc4f9156"} Jan 13 21:15:09.069071 kubelet[2454]: E0113 21:15:09.069053 2454 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"fc42e8bc-befe-48fd-a086-67842b49de77\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a3da4175aa673e0ee67bd76802f5b4d5b22e54b712944b48d594d5bdcc4f9156\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 13 21:15:09.069135 kubelet[2454]: E0113 21:15:09.069076 2454 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"fc42e8bc-befe-48fd-a086-67842b49de77\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a3da4175aa673e0ee67bd76802f5b4d5b22e54b712944b48d594d5bdcc4f9156\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-649ff95fb-x7s9h" podUID="fc42e8bc-befe-48fd-a086-67842b49de77" Jan 13 21:15:09.489135 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3fc229edecaa0e6fb512b5968a69dbac36c89051849e486433976bef55302188-shm.mount: Deactivated successfully. Jan 13 21:15:09.489222 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5aecf0c1274be05dd42d4534f4059ab89d0b47f22e24fba1edca2b3138e3c9de-shm.mount: Deactivated successfully. Jan 13 21:15:09.489272 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a3da4175aa673e0ee67bd76802f5b4d5b22e54b712944b48d594d5bdcc4f9156-shm.mount: Deactivated successfully. Jan 13 21:15:09.489318 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-57ecb9b8ccdf0ab6b04474895c30e66b2c063daae7f98e21907bf3549d822357-shm.mount: Deactivated successfully. Jan 13 21:15:09.489374 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-49ceb3ec3f78ca93d3db302af1463b4bfd9fc3863f2aaed382d90d4f7b446ad1-shm.mount: Deactivated successfully. Jan 13 21:15:13.562035 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2988118002.mount: Deactivated successfully. 
Jan 13 21:15:13.664693 containerd[1434]: time="2025-01-13T21:15:13.664317523Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:15:13.665044 containerd[1434]: time="2025-01-13T21:15:13.664818245Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=137671762" Jan 13 21:15:13.670345 containerd[1434]: time="2025-01-13T21:15:13.670288470Z" level=info msg="ImageCreate event name:\"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:15:13.672540 containerd[1434]: time="2025-01-13T21:15:13.672493401Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:15:13.673134 containerd[1434]: time="2025-01-13T21:15:13.673097764Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"137671624\" in 4.664235024s" Jan 13 21:15:13.673134 containerd[1434]: time="2025-01-13T21:15:13.673134044Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\"" Jan 13 21:15:13.687849 containerd[1434]: time="2025-01-13T21:15:13.687801592Z" level=info msg="CreateContainer within sandbox \"1d6466a1ec1870df86c367e6dd9451406a3059472aa981e6cd8e41c5a2c360cb\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 13 21:15:13.705460 containerd[1434]: time="2025-01-13T21:15:13.705352553Z" level=info msg="CreateContainer within sandbox \"1d6466a1ec1870df86c367e6dd9451406a3059472aa981e6cd8e41c5a2c360cb\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"819775162362a8248671885892955d22d8453be9c93528bbf61db5eabf67dd3b\"" Jan 13 21:15:13.705823 containerd[1434]: time="2025-01-13T21:15:13.705797435Z" level=info msg="StartContainer for \"819775162362a8248671885892955d22d8453be9c93528bbf61db5eabf67dd3b\"" Jan 13 21:15:13.765856 systemd[1]: Started cri-containerd-819775162362a8248671885892955d22d8453be9c93528bbf61db5eabf67dd3b.scope - libcontainer container 819775162362a8248671885892955d22d8453be9c93528bbf61db5eabf67dd3b. Jan 13 21:15:13.797144 containerd[1434]: time="2025-01-13T21:15:13.797052939Z" level=info msg="StartContainer for \"819775162362a8248671885892955d22d8453be9c93528bbf61db5eabf67dd3b\" returns successfully" Jan 13 21:15:13.951726 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 13 21:15:13.951855 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Jan 13 21:15:14.135286 kubelet[2454]: E0113 21:15:14.135241 2454 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:15:15.139194 kubelet[2454]: I0113 21:15:15.139136 2454 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 13 21:15:15.139660 kubelet[2454]: E0113 21:15:15.139547 2454 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:15:15.385725 kernel: bpftool[3861]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 13 21:15:15.539147 systemd-networkd[1376]: vxlan.calico: Link UP Jan 13 21:15:15.539158 systemd-networkd[1376]: vxlan.calico: Gained carrier Jan 13 21:15:16.326768 systemd[1]: Started sshd@7-10.0.0.48:22-10.0.0.1:52042.service - OpenSSH per-connection server daemon (10.0.0.1:52042). Jan 13 21:15:16.372317 sshd[3938]: Accepted publickey for core from 10.0.0.1 port 52042 ssh2: RSA SHA256:yd4gyStb+mhc+KSvOhXa4vXVFZWeTXZvH887VPDApJg Jan 13 21:15:16.373910 sshd[3938]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:15:16.377491 systemd-logind[1418]: New session 8 of user core. Jan 13 21:15:16.385843 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 13 21:15:16.541558 sshd[3938]: pam_unix(sshd:session): session closed for user core Jan 13 21:15:16.545077 systemd-logind[1418]: Session 8 logged out. Waiting for processes to exit. Jan 13 21:15:16.545530 systemd[1]: sshd@7-10.0.0.48:22-10.0.0.1:52042.service: Deactivated successfully. Jan 13 21:15:16.548117 systemd[1]: session-8.scope: Deactivated successfully. Jan 13 21:15:16.548953 systemd-logind[1418]: Removed session 8. Jan 13 21:15:17.004849 systemd-networkd[1376]: vxlan.calico: Gained IPv6LL Jan 13 21:15:19.918341 containerd[1434]: time="2025-01-13T21:15:19.918261897Z" level=info msg="StopPodSandbox for \"49ceb3ec3f78ca93d3db302af1463b4bfd9fc3863f2aaed382d90d4f7b446ad1\"" Jan 13 21:15:19.918744 containerd[1434]: time="2025-01-13T21:15:19.918381458Z" level=info msg="StopPodSandbox for \"cccb7205aa79c7ea0deaae1d2a32d8972a8301305da0eddd0cdf6e347de11ff8\"" Jan 13 21:15:19.919170 containerd[1434]: time="2025-01-13T21:15:19.918869699Z" level=info msg="StopPodSandbox for \"5aecf0c1274be05dd42d4534f4059ab89d0b47f22e24fba1edca2b3138e3c9de\"" Jan 13 21:15:20.051128 kubelet[2454]: I0113 21:15:20.050926 2454 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-ddtzj" podStartSLOduration=7.166599534 podStartE2EDuration="23.050908265s" podCreationTimestamp="2025-01-13 21:14:57 +0000 UTC" firstStartedPulling="2025-01-13 21:14:57.789605476 +0000 UTC m=+12.966683058" lastFinishedPulling="2025-01-13 21:15:13.673913967 +0000 UTC m=+28.850991789" observedRunningTime="2025-01-13 21:15:14.14913041 +0000 UTC m=+29.326207992" watchObservedRunningTime="2025-01-13 21:15:20.050908265 +0000 UTC m=+35.227985847" Jan 13 21:15:20.198206 containerd[1434]: 2025-01-13 21:15:20.053 [INFO][4000] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="cccb7205aa79c7ea0deaae1d2a32d8972a8301305da0eddd0cdf6e347de11ff8" Jan 13 21:15:20.198206 containerd[1434]: 2025-01-13 21:15:20.053 [INFO][4000] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="cccb7205aa79c7ea0deaae1d2a32d8972a8301305da0eddd0cdf6e347de11ff8" iface="eth0" netns="/var/run/netns/cni-59d6b92c-e665-14dc-7328-d44abfc57ec4" Jan 13 21:15:20.198206 containerd[1434]: 2025-01-13 21:15:20.053 [INFO][4000] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="cccb7205aa79c7ea0deaae1d2a32d8972a8301305da0eddd0cdf6e347de11ff8" iface="eth0" netns="/var/run/netns/cni-59d6b92c-e665-14dc-7328-d44abfc57ec4" Jan 13 21:15:20.198206 containerd[1434]: 2025-01-13 21:15:20.054 [INFO][4000] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="cccb7205aa79c7ea0deaae1d2a32d8972a8301305da0eddd0cdf6e347de11ff8" iface="eth0" netns="/var/run/netns/cni-59d6b92c-e665-14dc-7328-d44abfc57ec4" Jan 13 21:15:20.198206 containerd[1434]: 2025-01-13 21:15:20.054 [INFO][4000] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="cccb7205aa79c7ea0deaae1d2a32d8972a8301305da0eddd0cdf6e347de11ff8" Jan 13 21:15:20.198206 containerd[1434]: 2025-01-13 21:15:20.054 [INFO][4000] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cccb7205aa79c7ea0deaae1d2a32d8972a8301305da0eddd0cdf6e347de11ff8" Jan 13 21:15:20.198206 containerd[1434]: 2025-01-13 21:15:20.176 [INFO][4031] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cccb7205aa79c7ea0deaae1d2a32d8972a8301305da0eddd0cdf6e347de11ff8" HandleID="k8s-pod-network.cccb7205aa79c7ea0deaae1d2a32d8972a8301305da0eddd0cdf6e347de11ff8" Workload="localhost-k8s-csi--node--driver--st9qh-eth0" Jan 13 21:15:20.198206 containerd[1434]: 2025-01-13 21:15:20.177 [INFO][4031] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:15:20.198206 containerd[1434]: 2025-01-13 21:15:20.177 [INFO][4031] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:15:20.198206 containerd[1434]: 2025-01-13 21:15:20.191 [WARNING][4031] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="cccb7205aa79c7ea0deaae1d2a32d8972a8301305da0eddd0cdf6e347de11ff8" HandleID="k8s-pod-network.cccb7205aa79c7ea0deaae1d2a32d8972a8301305da0eddd0cdf6e347de11ff8" Workload="localhost-k8s-csi--node--driver--st9qh-eth0" Jan 13 21:15:20.198206 containerd[1434]: 2025-01-13 21:15:20.191 [INFO][4031] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cccb7205aa79c7ea0deaae1d2a32d8972a8301305da0eddd0cdf6e347de11ff8" HandleID="k8s-pod-network.cccb7205aa79c7ea0deaae1d2a32d8972a8301305da0eddd0cdf6e347de11ff8" Workload="localhost-k8s-csi--node--driver--st9qh-eth0" Jan 13 21:15:20.198206 containerd[1434]: 2025-01-13 21:15:20.193 [INFO][4031] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:15:20.198206 containerd[1434]: 2025-01-13 21:15:20.196 [INFO][4000] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="cccb7205aa79c7ea0deaae1d2a32d8972a8301305da0eddd0cdf6e347de11ff8" Jan 13 21:15:20.199492 containerd[1434]: time="2025-01-13T21:15:20.198907983Z" level=info msg="TearDown network for sandbox \"cccb7205aa79c7ea0deaae1d2a32d8972a8301305da0eddd0cdf6e347de11ff8\" successfully" Jan 13 21:15:20.199492 containerd[1434]: time="2025-01-13T21:15:20.198938623Z" level=info msg="StopPodSandbox for \"cccb7205aa79c7ea0deaae1d2a32d8972a8301305da0eddd0cdf6e347de11ff8\" returns successfully" Jan 13 21:15:20.201946 containerd[1434]: time="2025-01-13T21:15:20.201912672Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-st9qh,Uid:67345a72-9d66-4d9b-8d45-698aed92c23c,Namespace:calico-system,Attempt:1,}" Jan 13 21:15:20.201954 systemd[1]: run-netns-cni\x2d59d6b92c\x2de665\x2d14dc\x2d7328\x2dd44abfc57ec4.mount: Deactivated successfully. Jan 13 21:15:20.210466 containerd[1434]: 2025-01-13 21:15:20.050 [INFO][4004] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="5aecf0c1274be05dd42d4534f4059ab89d0b47f22e24fba1edca2b3138e3c9de" Jan 13 21:15:20.210466 containerd[1434]: 2025-01-13 21:15:20.050 [INFO][4004] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="5aecf0c1274be05dd42d4534f4059ab89d0b47f22e24fba1edca2b3138e3c9de" iface="eth0" netns="/var/run/netns/cni-1cdc208b-46a4-d68f-8099-a31af01347cd" Jan 13 21:15:20.210466 containerd[1434]: 2025-01-13 21:15:20.050 [INFO][4004] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="5aecf0c1274be05dd42d4534f4059ab89d0b47f22e24fba1edca2b3138e3c9de" iface="eth0" netns="/var/run/netns/cni-1cdc208b-46a4-d68f-8099-a31af01347cd" Jan 13 21:15:20.210466 containerd[1434]: 2025-01-13 21:15:20.051 [INFO][4004] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="5aecf0c1274be05dd42d4534f4059ab89d0b47f22e24fba1edca2b3138e3c9de" iface="eth0" netns="/var/run/netns/cni-1cdc208b-46a4-d68f-8099-a31af01347cd" Jan 13 21:15:20.210466 containerd[1434]: 2025-01-13 21:15:20.051 [INFO][4004] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="5aecf0c1274be05dd42d4534f4059ab89d0b47f22e24fba1edca2b3138e3c9de" Jan 13 21:15:20.210466 containerd[1434]: 2025-01-13 21:15:20.051 [INFO][4004] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5aecf0c1274be05dd42d4534f4059ab89d0b47f22e24fba1edca2b3138e3c9de" Jan 13 21:15:20.210466 containerd[1434]: 2025-01-13 21:15:20.176 [INFO][4030] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5aecf0c1274be05dd42d4534f4059ab89d0b47f22e24fba1edca2b3138e3c9de" HandleID="k8s-pod-network.5aecf0c1274be05dd42d4534f4059ab89d0b47f22e24fba1edca2b3138e3c9de" Workload="localhost-k8s-calico--apiserver--79d978d948--phbxp-eth0" Jan 13 21:15:20.210466 containerd[1434]: 2025-01-13 21:15:20.177 [INFO][4030] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:15:20.210466 containerd[1434]: 2025-01-13 21:15:20.193 [INFO][4030] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:15:20.210466 containerd[1434]: 2025-01-13 21:15:20.203 [WARNING][4030] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5aecf0c1274be05dd42d4534f4059ab89d0b47f22e24fba1edca2b3138e3c9de" HandleID="k8s-pod-network.5aecf0c1274be05dd42d4534f4059ab89d0b47f22e24fba1edca2b3138e3c9de" Workload="localhost-k8s-calico--apiserver--79d978d948--phbxp-eth0" Jan 13 21:15:20.210466 containerd[1434]: 2025-01-13 21:15:20.203 [INFO][4030] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5aecf0c1274be05dd42d4534f4059ab89d0b47f22e24fba1edca2b3138e3c9de" HandleID="k8s-pod-network.5aecf0c1274be05dd42d4534f4059ab89d0b47f22e24fba1edca2b3138e3c9de" Workload="localhost-k8s-calico--apiserver--79d978d948--phbxp-eth0" Jan 13 21:15:20.210466 containerd[1434]: 2025-01-13 21:15:20.205 [INFO][4030] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:15:20.210466 containerd[1434]: 2025-01-13 21:15:20.209 [INFO][4004] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="5aecf0c1274be05dd42d4534f4059ab89d0b47f22e24fba1edca2b3138e3c9de" Jan 13 21:15:20.211357 containerd[1434]: time="2025-01-13T21:15:20.210671737Z" level=info msg="TearDown network for sandbox \"5aecf0c1274be05dd42d4534f4059ab89d0b47f22e24fba1edca2b3138e3c9de\" successfully" Jan 13 21:15:20.211357 containerd[1434]: time="2025-01-13T21:15:20.210714418Z" level=info msg="StopPodSandbox for \"5aecf0c1274be05dd42d4534f4059ab89d0b47f22e24fba1edca2b3138e3c9de\" returns successfully" Jan 13 21:15:20.211423 containerd[1434]: time="2025-01-13T21:15:20.211387380Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79d978d948-phbxp,Uid:eba042e6-b417-4f58-b615-4e861e1468fa,Namespace:calico-apiserver,Attempt:1,}" Jan 13 21:15:20.214362 systemd[1]: run-netns-cni\x2d1cdc208b\x2d46a4\x2dd68f\x2d8099\x2da31af01347cd.mount: Deactivated successfully. Jan 13 21:15:20.219789 containerd[1434]: 2025-01-13 21:15:20.048 [INFO][4005] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="49ceb3ec3f78ca93d3db302af1463b4bfd9fc3863f2aaed382d90d4f7b446ad1" Jan 13 21:15:20.219789 containerd[1434]: 2025-01-13 21:15:20.049 [INFO][4005] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="49ceb3ec3f78ca93d3db302af1463b4bfd9fc3863f2aaed382d90d4f7b446ad1" iface="eth0" netns="/var/run/netns/cni-dbd77bc6-89ec-c3d4-3b06-9a9e05763f77" Jan 13 21:15:20.219789 containerd[1434]: 2025-01-13 21:15:20.050 [INFO][4005] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="49ceb3ec3f78ca93d3db302af1463b4bfd9fc3863f2aaed382d90d4f7b446ad1" iface="eth0" netns="/var/run/netns/cni-dbd77bc6-89ec-c3d4-3b06-9a9e05763f77" Jan 13 21:15:20.219789 containerd[1434]: 2025-01-13 21:15:20.051 [INFO][4005] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="49ceb3ec3f78ca93d3db302af1463b4bfd9fc3863f2aaed382d90d4f7b446ad1" iface="eth0" netns="/var/run/netns/cni-dbd77bc6-89ec-c3d4-3b06-9a9e05763f77" Jan 13 21:15:20.219789 containerd[1434]: 2025-01-13 21:15:20.052 [INFO][4005] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="49ceb3ec3f78ca93d3db302af1463b4bfd9fc3863f2aaed382d90d4f7b446ad1" Jan 13 21:15:20.219789 containerd[1434]: 2025-01-13 21:15:20.052 [INFO][4005] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="49ceb3ec3f78ca93d3db302af1463b4bfd9fc3863f2aaed382d90d4f7b446ad1" Jan 13 21:15:20.219789 containerd[1434]: 2025-01-13 21:15:20.181 [INFO][4029] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="49ceb3ec3f78ca93d3db302af1463b4bfd9fc3863f2aaed382d90d4f7b446ad1" HandleID="k8s-pod-network.49ceb3ec3f78ca93d3db302af1463b4bfd9fc3863f2aaed382d90d4f7b446ad1" Workload="localhost-k8s-coredns--6f6b679f8f--pcklw-eth0" Jan 13 21:15:20.219789 containerd[1434]: 2025-01-13 21:15:20.182 [INFO][4029] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:15:20.219789 containerd[1434]: 2025-01-13 21:15:20.205 [INFO][4029] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:15:20.219789 containerd[1434]: 2025-01-13 21:15:20.214 [WARNING][4029] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="49ceb3ec3f78ca93d3db302af1463b4bfd9fc3863f2aaed382d90d4f7b446ad1" HandleID="k8s-pod-network.49ceb3ec3f78ca93d3db302af1463b4bfd9fc3863f2aaed382d90d4f7b446ad1" Workload="localhost-k8s-coredns--6f6b679f8f--pcklw-eth0" Jan 13 21:15:20.219789 containerd[1434]: 2025-01-13 21:15:20.215 [INFO][4029] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="49ceb3ec3f78ca93d3db302af1463b4bfd9fc3863f2aaed382d90d4f7b446ad1" HandleID="k8s-pod-network.49ceb3ec3f78ca93d3db302af1463b4bfd9fc3863f2aaed382d90d4f7b446ad1" Workload="localhost-k8s-coredns--6f6b679f8f--pcklw-eth0" Jan 13 21:15:20.219789 containerd[1434]: 2025-01-13 21:15:20.216 [INFO][4029] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:15:20.219789 containerd[1434]: 2025-01-13 21:15:20.218 [INFO][4005] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="49ceb3ec3f78ca93d3db302af1463b4bfd9fc3863f2aaed382d90d4f7b446ad1" Jan 13 21:15:20.220134 containerd[1434]: time="2025-01-13T21:15:20.219914245Z" level=info msg="TearDown network for sandbox \"49ceb3ec3f78ca93d3db302af1463b4bfd9fc3863f2aaed382d90d4f7b446ad1\" successfully" Jan 13 21:15:20.220134 containerd[1434]: time="2025-01-13T21:15:20.219941965Z" level=info msg="StopPodSandbox for \"49ceb3ec3f78ca93d3db302af1463b4bfd9fc3863f2aaed382d90d4f7b446ad1\" returns successfully" Jan 13 21:15:20.221602 kubelet[2454]: E0113 21:15:20.220376 2454 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:15:20.221762 systemd[1]: run-netns-cni\x2ddbd77bc6\x2d89ec\x2dc3d4\x2d3b06\x2d9a9e05763f77.mount: Deactivated successfully. 
Jan 13 21:15:20.222244 containerd[1434]: time="2025-01-13T21:15:20.221944731Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-pcklw,Uid:6546b16e-ed32-4e3e-8156-6c685dd971ab,Namespace:kube-system,Attempt:1,}" Jan 13 21:15:20.443150 systemd-networkd[1376]: cali22cd610d14b: Link UP Jan 13 21:15:20.443354 systemd-networkd[1376]: cali22cd610d14b: Gained carrier Jan 13 21:15:20.460028 containerd[1434]: 2025-01-13 21:15:20.331 [INFO][4052] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--6f6b679f8f--pcklw-eth0 coredns-6f6b679f8f- kube-system 6546b16e-ed32-4e3e-8156-6c685dd971ab 842 0 2025-01-13 21:14:50 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-6f6b679f8f-pcklw eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali22cd610d14b [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="cd595a8b4eca0eb8ce0e8deeaec12857c2d4d820dc0b55a4ed2e8b7fbdb33b25" Namespace="kube-system" Pod="coredns-6f6b679f8f-pcklw" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--pcklw-" Jan 13 21:15:20.460028 containerd[1434]: 2025-01-13 21:15:20.331 [INFO][4052] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="cd595a8b4eca0eb8ce0e8deeaec12857c2d4d820dc0b55a4ed2e8b7fbdb33b25" Namespace="kube-system" Pod="coredns-6f6b679f8f-pcklw" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--pcklw-eth0" Jan 13 21:15:20.460028 containerd[1434]: 2025-01-13 21:15:20.384 [INFO][4096] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cd595a8b4eca0eb8ce0e8deeaec12857c2d4d820dc0b55a4ed2e8b7fbdb33b25" HandleID="k8s-pod-network.cd595a8b4eca0eb8ce0e8deeaec12857c2d4d820dc0b55a4ed2e8b7fbdb33b25" Workload="localhost-k8s-coredns--6f6b679f8f--pcklw-eth0" Jan 13 21:15:20.460028 containerd[1434]: 2025-01-13 21:15:20.405 [INFO][4096] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="cd595a8b4eca0eb8ce0e8deeaec12857c2d4d820dc0b55a4ed2e8b7fbdb33b25" HandleID="k8s-pod-network.cd595a8b4eca0eb8ce0e8deeaec12857c2d4d820dc0b55a4ed2e8b7fbdb33b25" Workload="localhost-k8s-coredns--6f6b679f8f--pcklw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40004a1690), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-6f6b679f8f-pcklw", "timestamp":"2025-01-13 21:15:20.384550611 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 21:15:20.460028 containerd[1434]: 2025-01-13 21:15:20.405 [INFO][4096] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:15:20.460028 containerd[1434]: 2025-01-13 21:15:20.405 [INFO][4096] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 13 21:15:20.460028 containerd[1434]: 2025-01-13 21:15:20.405 [INFO][4096] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 13 21:15:20.460028 containerd[1434]: 2025-01-13 21:15:20.414 [INFO][4096] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.cd595a8b4eca0eb8ce0e8deeaec12857c2d4d820dc0b55a4ed2e8b7fbdb33b25" host="localhost" Jan 13 21:15:20.460028 containerd[1434]: 2025-01-13 21:15:20.419 [INFO][4096] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 13 21:15:20.460028 containerd[1434]: 2025-01-13 21:15:20.424 [INFO][4096] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 13 21:15:20.460028 containerd[1434]: 2025-01-13 21:15:20.425 [INFO][4096] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 13 21:15:20.460028 containerd[1434]: 2025-01-13 21:15:20.427 [INFO][4096] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 13 21:15:20.460028 containerd[1434]: 2025-01-13 21:15:20.427 [INFO][4096] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.cd595a8b4eca0eb8ce0e8deeaec12857c2d4d820dc0b55a4ed2e8b7fbdb33b25" host="localhost" Jan 13 21:15:20.460028 containerd[1434]: 2025-01-13 21:15:20.429 [INFO][4096] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.cd595a8b4eca0eb8ce0e8deeaec12857c2d4d820dc0b55a4ed2e8b7fbdb33b25 Jan 13 21:15:20.460028 containerd[1434]: 2025-01-13 21:15:20.433 [INFO][4096] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.cd595a8b4eca0eb8ce0e8deeaec12857c2d4d820dc0b55a4ed2e8b7fbdb33b25" host="localhost" Jan 13 21:15:20.460028 containerd[1434]: 2025-01-13 21:15:20.437 [INFO][4096] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.cd595a8b4eca0eb8ce0e8deeaec12857c2d4d820dc0b55a4ed2e8b7fbdb33b25" host="localhost" Jan 13 21:15:20.460028 containerd[1434]: 2025-01-13 21:15:20.437 [INFO][4096] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.cd595a8b4eca0eb8ce0e8deeaec12857c2d4d820dc0b55a4ed2e8b7fbdb33b25" host="localhost" Jan 13 21:15:20.460028 containerd[1434]: 2025-01-13 21:15:20.437 [INFO][4096] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
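The IPAM exchange just logged confirms an affinity for block 192.168.88.128/26 on host "localhost" and then claims 192.168.88.129 from it for coredns-6f6b679f8f-pcklw. A small sketch, using only values copied from the log, that restates the containment the plugin is checking:

```python
# Restates the relationship logged above: the host holds an affinity for a /26
# block and one address from it is handed to the coredns pod. Values are taken
# directly from the journal entries.
import ipaddress

block = ipaddress.ip_network("192.168.88.128/26")  # affinity block from the log
claimed = ipaddress.ip_address("192.168.88.129")   # address claimed for coredns

assert claimed in block
print(f"block {block} spans {block.network_address}-{block.broadcast_address} "
      f"({block.num_addresses} addresses)")
print(f"claimed address {claimed} lies inside the block: {claimed in block}")
```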
Jan 13 21:15:20.460028 containerd[1434]: 2025-01-13 21:15:20.438 [INFO][4096] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="cd595a8b4eca0eb8ce0e8deeaec12857c2d4d820dc0b55a4ed2e8b7fbdb33b25" HandleID="k8s-pod-network.cd595a8b4eca0eb8ce0e8deeaec12857c2d4d820dc0b55a4ed2e8b7fbdb33b25" Workload="localhost-k8s-coredns--6f6b679f8f--pcklw-eth0" Jan 13 21:15:20.460547 containerd[1434]: 2025-01-13 21:15:20.439 [INFO][4052] cni-plugin/k8s.go 386: Populated endpoint ContainerID="cd595a8b4eca0eb8ce0e8deeaec12857c2d4d820dc0b55a4ed2e8b7fbdb33b25" Namespace="kube-system" Pod="coredns-6f6b679f8f-pcklw" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--pcklw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--pcklw-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"6546b16e-ed32-4e3e-8156-6c685dd971ab", ResourceVersion:"842", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 14, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-6f6b679f8f-pcklw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali22cd610d14b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:15:20.460547 containerd[1434]: 2025-01-13 21:15:20.440 [INFO][4052] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="cd595a8b4eca0eb8ce0e8deeaec12857c2d4d820dc0b55a4ed2e8b7fbdb33b25" Namespace="kube-system" Pod="coredns-6f6b679f8f-pcklw" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--pcklw-eth0" Jan 13 21:15:20.460547 containerd[1434]: 2025-01-13 21:15:20.440 [INFO][4052] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali22cd610d14b ContainerID="cd595a8b4eca0eb8ce0e8deeaec12857c2d4d820dc0b55a4ed2e8b7fbdb33b25" Namespace="kube-system" Pod="coredns-6f6b679f8f-pcklw" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--pcklw-eth0" Jan 13 21:15:20.460547 containerd[1434]: 2025-01-13 21:15:20.442 [INFO][4052] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="cd595a8b4eca0eb8ce0e8deeaec12857c2d4d820dc0b55a4ed2e8b7fbdb33b25" Namespace="kube-system" Pod="coredns-6f6b679f8f-pcklw" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--pcklw-eth0" Jan 13 21:15:20.460547 containerd[1434]: 2025-01-13 21:15:20.442 
[INFO][4052] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="cd595a8b4eca0eb8ce0e8deeaec12857c2d4d820dc0b55a4ed2e8b7fbdb33b25" Namespace="kube-system" Pod="coredns-6f6b679f8f-pcklw" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--pcklw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--pcklw-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"6546b16e-ed32-4e3e-8156-6c685dd971ab", ResourceVersion:"842", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 14, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"cd595a8b4eca0eb8ce0e8deeaec12857c2d4d820dc0b55a4ed2e8b7fbdb33b25", Pod:"coredns-6f6b679f8f-pcklw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali22cd610d14b", MAC:"4a:a6:a7:a7:2b:1f", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:15:20.460547 containerd[1434]: 2025-01-13 21:15:20.452 [INFO][4052] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="cd595a8b4eca0eb8ce0e8deeaec12857c2d4d820dc0b55a4ed2e8b7fbdb33b25" Namespace="kube-system" Pod="coredns-6f6b679f8f-pcklw" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--pcklw-eth0" Jan 13 21:15:20.486252 containerd[1434]: time="2025-01-13T21:15:20.485841190Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:15:20.486252 containerd[1434]: time="2025-01-13T21:15:20.486213991Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:15:20.486252 containerd[1434]: time="2025-01-13T21:15:20.486227511Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:15:20.486423 containerd[1434]: time="2025-01-13T21:15:20.486310792Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:15:20.509750 systemd[1]: Started cri-containerd-cd595a8b4eca0eb8ce0e8deeaec12857c2d4d820dc0b55a4ed2e8b7fbdb33b25.scope - libcontainer container cd595a8b4eca0eb8ce0e8deeaec12857c2d4d820dc0b55a4ed2e8b7fbdb33b25. 
Jan 13 21:15:20.524229 systemd-resolved[1303]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 13 21:15:20.553369 systemd-networkd[1376]: cali8bc608c2867: Link UP Jan 13 21:15:20.553760 systemd-networkd[1376]: cali8bc608c2867: Gained carrier Jan 13 21:15:20.559367 containerd[1434]: time="2025-01-13T21:15:20.559312687Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-pcklw,Uid:6546b16e-ed32-4e3e-8156-6c685dd971ab,Namespace:kube-system,Attempt:1,} returns sandbox id \"cd595a8b4eca0eb8ce0e8deeaec12857c2d4d820dc0b55a4ed2e8b7fbdb33b25\"" Jan 13 21:15:20.562073 kubelet[2454]: E0113 21:15:20.562003 2454 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:15:20.564223 containerd[1434]: time="2025-01-13T21:15:20.564114302Z" level=info msg="CreateContainer within sandbox \"cd595a8b4eca0eb8ce0e8deeaec12857c2d4d820dc0b55a4ed2e8b7fbdb33b25\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 13 21:15:20.570597 containerd[1434]: 2025-01-13 21:15:20.341 [INFO][4058] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--79d978d948--phbxp-eth0 calico-apiserver-79d978d948- calico-apiserver eba042e6-b417-4f58-b615-4e861e1468fa 843 0 2025-01-13 21:14:56 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:79d978d948 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-79d978d948-phbxp eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali8bc608c2867 [] []}} ContainerID="8c65932b3cf2063219d52aae862cb81349ecf476952b527c0322649507ab3937" Namespace="calico-apiserver" Pod="calico-apiserver-79d978d948-phbxp" WorkloadEndpoint="localhost-k8s-calico--apiserver--79d978d948--phbxp-" Jan 13 21:15:20.570597 containerd[1434]: 2025-01-13 21:15:20.343 [INFO][4058] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="8c65932b3cf2063219d52aae862cb81349ecf476952b527c0322649507ab3937" Namespace="calico-apiserver" Pod="calico-apiserver-79d978d948-phbxp" WorkloadEndpoint="localhost-k8s-calico--apiserver--79d978d948--phbxp-eth0" Jan 13 21:15:20.570597 containerd[1434]: 2025-01-13 21:15:20.375 [INFO][4103] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8c65932b3cf2063219d52aae862cb81349ecf476952b527c0322649507ab3937" HandleID="k8s-pod-network.8c65932b3cf2063219d52aae862cb81349ecf476952b527c0322649507ab3937" Workload="localhost-k8s-calico--apiserver--79d978d948--phbxp-eth0" Jan 13 21:15:20.570597 containerd[1434]: 2025-01-13 21:15:20.406 [INFO][4103] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8c65932b3cf2063219d52aae862cb81349ecf476952b527c0322649507ab3937" HandleID="k8s-pod-network.8c65932b3cf2063219d52aae862cb81349ecf476952b527c0322649507ab3937" Workload="localhost-k8s-calico--apiserver--79d978d948--phbxp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000521680), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-79d978d948-phbxp", "timestamp":"2025-01-13 21:15:20.375037543 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, 
MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 21:15:20.570597 containerd[1434]: 2025-01-13 21:15:20.406 [INFO][4103] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:15:20.570597 containerd[1434]: 2025-01-13 21:15:20.438 [INFO][4103] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:15:20.570597 containerd[1434]: 2025-01-13 21:15:20.438 [INFO][4103] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 13 21:15:20.570597 containerd[1434]: 2025-01-13 21:15:20.515 [INFO][4103] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.8c65932b3cf2063219d52aae862cb81349ecf476952b527c0322649507ab3937" host="localhost" Jan 13 21:15:20.570597 containerd[1434]: 2025-01-13 21:15:20.522 [INFO][4103] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 13 21:15:20.570597 containerd[1434]: 2025-01-13 21:15:20.528 [INFO][4103] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 13 21:15:20.570597 containerd[1434]: 2025-01-13 21:15:20.530 [INFO][4103] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 13 21:15:20.570597 containerd[1434]: 2025-01-13 21:15:20.533 [INFO][4103] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 13 21:15:20.570597 containerd[1434]: 2025-01-13 21:15:20.533 [INFO][4103] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.8c65932b3cf2063219d52aae862cb81349ecf476952b527c0322649507ab3937" host="localhost" Jan 13 21:15:20.570597 containerd[1434]: 2025-01-13 21:15:20.535 [INFO][4103] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.8c65932b3cf2063219d52aae862cb81349ecf476952b527c0322649507ab3937 Jan 13 21:15:20.570597 containerd[1434]: 2025-01-13 21:15:20.540 [INFO][4103] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.8c65932b3cf2063219d52aae862cb81349ecf476952b527c0322649507ab3937" host="localhost" Jan 13 21:15:20.570597 containerd[1434]: 2025-01-13 21:15:20.547 [INFO][4103] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.8c65932b3cf2063219d52aae862cb81349ecf476952b527c0322649507ab3937" host="localhost" Jan 13 21:15:20.570597 containerd[1434]: 2025-01-13 21:15:20.547 [INFO][4103] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.8c65932b3cf2063219d52aae862cb81349ecf476952b527c0322649507ab3937" host="localhost" Jan 13 21:15:20.570597 containerd[1434]: 2025-01-13 21:15:20.547 [INFO][4103] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
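The calico-apiserver-79d978d948-phbxp workload goes through the same IPAM flow and is assigned 192.168.88.130/26 from the same block. The Calico plugin lines wrapped inside these containerd entries share one inner format (timestamp, level, a numeric tag, source file and line, message); a rough parser for that format, with the field names chosen here purely for illustration:

```python
# Rough parser for the Calico CNI lines embedded in the containerd journal
# entries above, e.g.
#   2025-01-13 21:15:20.547 [INFO][4103] ipam/ipam.go 1216: Successfully claimed IPs: ...
import re

CALICO_LINE = re.compile(
    r"(?P<ts>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d+) "
    r"\[(?P<level>[A-Z]+)\]\[(?P<tag>\d+)\] "
    r"(?P<source>\S+) (?P<lineno>\d+): (?P<msg>.*)"
)

def parse_calico(line):
    """Return the fields of one inner Calico log line, or None if it doesn't match."""
    m = CALICO_LINE.match(line)
    return m.groupdict() if m else None

example = ("2025-01-13 21:15:20.547 [INFO][4103] ipam/ipam.go 1216: "
           "Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26")
print(parse_calico(example))
```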
Jan 13 21:15:20.570597 containerd[1434]: 2025-01-13 21:15:20.547 [INFO][4103] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="8c65932b3cf2063219d52aae862cb81349ecf476952b527c0322649507ab3937" HandleID="k8s-pod-network.8c65932b3cf2063219d52aae862cb81349ecf476952b527c0322649507ab3937" Workload="localhost-k8s-calico--apiserver--79d978d948--phbxp-eth0" Jan 13 21:15:20.571406 containerd[1434]: 2025-01-13 21:15:20.550 [INFO][4058] cni-plugin/k8s.go 386: Populated endpoint ContainerID="8c65932b3cf2063219d52aae862cb81349ecf476952b527c0322649507ab3937" Namespace="calico-apiserver" Pod="calico-apiserver-79d978d948-phbxp" WorkloadEndpoint="localhost-k8s-calico--apiserver--79d978d948--phbxp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--79d978d948--phbxp-eth0", GenerateName:"calico-apiserver-79d978d948-", Namespace:"calico-apiserver", SelfLink:"", UID:"eba042e6-b417-4f58-b615-4e861e1468fa", ResourceVersion:"843", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 14, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"79d978d948", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-79d978d948-phbxp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8bc608c2867", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:15:20.571406 containerd[1434]: 2025-01-13 21:15:20.550 [INFO][4058] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="8c65932b3cf2063219d52aae862cb81349ecf476952b527c0322649507ab3937" Namespace="calico-apiserver" Pod="calico-apiserver-79d978d948-phbxp" WorkloadEndpoint="localhost-k8s-calico--apiserver--79d978d948--phbxp-eth0" Jan 13 21:15:20.571406 containerd[1434]: 2025-01-13 21:15:20.550 [INFO][4058] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8bc608c2867 ContainerID="8c65932b3cf2063219d52aae862cb81349ecf476952b527c0322649507ab3937" Namespace="calico-apiserver" Pod="calico-apiserver-79d978d948-phbxp" WorkloadEndpoint="localhost-k8s-calico--apiserver--79d978d948--phbxp-eth0" Jan 13 21:15:20.571406 containerd[1434]: 2025-01-13 21:15:20.554 [INFO][4058] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8c65932b3cf2063219d52aae862cb81349ecf476952b527c0322649507ab3937" Namespace="calico-apiserver" Pod="calico-apiserver-79d978d948-phbxp" WorkloadEndpoint="localhost-k8s-calico--apiserver--79d978d948--phbxp-eth0" Jan 13 21:15:20.571406 containerd[1434]: 2025-01-13 21:15:20.554 [INFO][4058] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="8c65932b3cf2063219d52aae862cb81349ecf476952b527c0322649507ab3937" Namespace="calico-apiserver" Pod="calico-apiserver-79d978d948-phbxp" WorkloadEndpoint="localhost-k8s-calico--apiserver--79d978d948--phbxp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--79d978d948--phbxp-eth0", GenerateName:"calico-apiserver-79d978d948-", Namespace:"calico-apiserver", SelfLink:"", UID:"eba042e6-b417-4f58-b615-4e861e1468fa", ResourceVersion:"843", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 14, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"79d978d948", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8c65932b3cf2063219d52aae862cb81349ecf476952b527c0322649507ab3937", Pod:"calico-apiserver-79d978d948-phbxp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8bc608c2867", MAC:"56:0a:b8:c3:37:b5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:15:20.571406 containerd[1434]: 2025-01-13 21:15:20.565 [INFO][4058] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="8c65932b3cf2063219d52aae862cb81349ecf476952b527c0322649507ab3937" Namespace="calico-apiserver" Pod="calico-apiserver-79d978d948-phbxp" WorkloadEndpoint="localhost-k8s-calico--apiserver--79d978d948--phbxp-eth0" Jan 13 21:15:20.589723 containerd[1434]: time="2025-01-13T21:15:20.589650257Z" level=info msg="CreateContainer within sandbox \"cd595a8b4eca0eb8ce0e8deeaec12857c2d4d820dc0b55a4ed2e8b7fbdb33b25\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"822796e9954243b04971c7711458869984fd5521a4ba7606726ce421ebae4e3d\"" Jan 13 21:15:20.591157 containerd[1434]: time="2025-01-13T21:15:20.590951981Z" level=info msg="StartContainer for \"822796e9954243b04971c7711458869984fd5521a4ba7606726ce421ebae4e3d\"" Jan 13 21:15:20.619785 systemd[1]: Started cri-containerd-822796e9954243b04971c7711458869984fd5521a4ba7606726ce421ebae4e3d.scope - libcontainer container 822796e9954243b04971c7711458869984fd5521a4ba7606726ce421ebae4e3d. Jan 13 21:15:20.622613 containerd[1434]: time="2025-01-13T21:15:20.621877512Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:15:20.622613 containerd[1434]: time="2025-01-13T21:15:20.622212593Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:15:20.622613 containerd[1434]: time="2025-01-13T21:15:20.622259113Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:15:20.622613 containerd[1434]: time="2025-01-13T21:15:20.622422154Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:15:20.639893 systemd[1]: Started cri-containerd-8c65932b3cf2063219d52aae862cb81349ecf476952b527c0322649507ab3937.scope - libcontainer container 8c65932b3cf2063219d52aae862cb81349ecf476952b527c0322649507ab3937. Jan 13 21:15:20.662174 systemd-networkd[1376]: calib345dfa7a21: Link UP Jan 13 21:15:20.662784 systemd-networkd[1376]: calib345dfa7a21: Gained carrier Jan 13 21:15:20.664641 systemd-resolved[1303]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 13 21:15:20.668170 containerd[1434]: time="2025-01-13T21:15:20.668113209Z" level=info msg="StartContainer for \"822796e9954243b04971c7711458869984fd5521a4ba7606726ce421ebae4e3d\" returns successfully" Jan 13 21:15:20.695856 containerd[1434]: 2025-01-13 21:15:20.342 [INFO][4056] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--st9qh-eth0 csi-node-driver- calico-system 67345a72-9d66-4d9b-8d45-698aed92c23c 844 0 2025-01-13 21:14:57 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:56747c9949 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-st9qh eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calib345dfa7a21 [] []}} ContainerID="0971861583694845e19750b424e2582906449996d89f2ba729c56c46b83c2993" Namespace="calico-system" Pod="csi-node-driver-st9qh" WorkloadEndpoint="localhost-k8s-csi--node--driver--st9qh-" Jan 13 21:15:20.695856 containerd[1434]: 2025-01-13 21:15:20.343 [INFO][4056] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="0971861583694845e19750b424e2582906449996d89f2ba729c56c46b83c2993" Namespace="calico-system" Pod="csi-node-driver-st9qh" WorkloadEndpoint="localhost-k8s-csi--node--driver--st9qh-eth0" Jan 13 21:15:20.695856 containerd[1434]: 2025-01-13 21:15:20.406 [INFO][4101] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0971861583694845e19750b424e2582906449996d89f2ba729c56c46b83c2993" HandleID="k8s-pod-network.0971861583694845e19750b424e2582906449996d89f2ba729c56c46b83c2993" Workload="localhost-k8s-csi--node--driver--st9qh-eth0" Jan 13 21:15:20.695856 containerd[1434]: 2025-01-13 21:15:20.417 [INFO][4101] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0971861583694845e19750b424e2582906449996d89f2ba729c56c46b83c2993" HandleID="k8s-pod-network.0971861583694845e19750b424e2582906449996d89f2ba729c56c46b83c2993" Workload="localhost-k8s-csi--node--driver--st9qh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002955d0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-st9qh", "timestamp":"2025-01-13 21:15:20.406227595 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 21:15:20.695856 containerd[1434]: 2025-01-13 21:15:20.417 [INFO][4101] ipam/ipam_plugin.go 
353: About to acquire host-wide IPAM lock. Jan 13 21:15:20.695856 containerd[1434]: 2025-01-13 21:15:20.547 [INFO][4101] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:15:20.695856 containerd[1434]: 2025-01-13 21:15:20.547 [INFO][4101] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 13 21:15:20.695856 containerd[1434]: 2025-01-13 21:15:20.616 [INFO][4101] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.0971861583694845e19750b424e2582906449996d89f2ba729c56c46b83c2993" host="localhost" Jan 13 21:15:20.695856 containerd[1434]: 2025-01-13 21:15:20.625 [INFO][4101] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 13 21:15:20.695856 containerd[1434]: 2025-01-13 21:15:20.633 [INFO][4101] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 13 21:15:20.695856 containerd[1434]: 2025-01-13 21:15:20.636 [INFO][4101] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 13 21:15:20.695856 containerd[1434]: 2025-01-13 21:15:20.639 [INFO][4101] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 13 21:15:20.695856 containerd[1434]: 2025-01-13 21:15:20.639 [INFO][4101] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.0971861583694845e19750b424e2582906449996d89f2ba729c56c46b83c2993" host="localhost" Jan 13 21:15:20.695856 containerd[1434]: 2025-01-13 21:15:20.640 [INFO][4101] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.0971861583694845e19750b424e2582906449996d89f2ba729c56c46b83c2993 Jan 13 21:15:20.695856 containerd[1434]: 2025-01-13 21:15:20.645 [INFO][4101] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.0971861583694845e19750b424e2582906449996d89f2ba729c56c46b83c2993" host="localhost" Jan 13 21:15:20.695856 containerd[1434]: 2025-01-13 21:15:20.655 [INFO][4101] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.0971861583694845e19750b424e2582906449996d89f2ba729c56c46b83c2993" host="localhost" Jan 13 21:15:20.695856 containerd[1434]: 2025-01-13 21:15:20.655 [INFO][4101] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.0971861583694845e19750b424e2582906449996d89f2ba729c56c46b83c2993" host="localhost" Jan 13 21:15:20.695856 containerd[1434]: 2025-01-13 21:15:20.655 [INFO][4101] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 13 21:15:20.695856 containerd[1434]: 2025-01-13 21:15:20.655 [INFO][4101] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="0971861583694845e19750b424e2582906449996d89f2ba729c56c46b83c2993" HandleID="k8s-pod-network.0971861583694845e19750b424e2582906449996d89f2ba729c56c46b83c2993" Workload="localhost-k8s-csi--node--driver--st9qh-eth0" Jan 13 21:15:20.696463 containerd[1434]: 2025-01-13 21:15:20.659 [INFO][4056] cni-plugin/k8s.go 386: Populated endpoint ContainerID="0971861583694845e19750b424e2582906449996d89f2ba729c56c46b83c2993" Namespace="calico-system" Pod="csi-node-driver-st9qh" WorkloadEndpoint="localhost-k8s-csi--node--driver--st9qh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--st9qh-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"67345a72-9d66-4d9b-8d45-698aed92c23c", ResourceVersion:"844", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 14, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-st9qh", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calib345dfa7a21", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:15:20.696463 containerd[1434]: 2025-01-13 21:15:20.659 [INFO][4056] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="0971861583694845e19750b424e2582906449996d89f2ba729c56c46b83c2993" Namespace="calico-system" Pod="csi-node-driver-st9qh" WorkloadEndpoint="localhost-k8s-csi--node--driver--st9qh-eth0" Jan 13 21:15:20.696463 containerd[1434]: 2025-01-13 21:15:20.659 [INFO][4056] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib345dfa7a21 ContainerID="0971861583694845e19750b424e2582906449996d89f2ba729c56c46b83c2993" Namespace="calico-system" Pod="csi-node-driver-st9qh" WorkloadEndpoint="localhost-k8s-csi--node--driver--st9qh-eth0" Jan 13 21:15:20.696463 containerd[1434]: 2025-01-13 21:15:20.664 [INFO][4056] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0971861583694845e19750b424e2582906449996d89f2ba729c56c46b83c2993" Namespace="calico-system" Pod="csi-node-driver-st9qh" WorkloadEndpoint="localhost-k8s-csi--node--driver--st9qh-eth0" Jan 13 21:15:20.696463 containerd[1434]: 2025-01-13 21:15:20.666 [INFO][4056] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="0971861583694845e19750b424e2582906449996d89f2ba729c56c46b83c2993" Namespace="calico-system" Pod="csi-node-driver-st9qh" WorkloadEndpoint="localhost-k8s-csi--node--driver--st9qh-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--st9qh-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"67345a72-9d66-4d9b-8d45-698aed92c23c", ResourceVersion:"844", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 14, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0971861583694845e19750b424e2582906449996d89f2ba729c56c46b83c2993", Pod:"csi-node-driver-st9qh", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calib345dfa7a21", MAC:"32:e2:02:c3:d6:c4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:15:20.696463 containerd[1434]: 2025-01-13 21:15:20.687 [INFO][4056] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="0971861583694845e19750b424e2582906449996d89f2ba729c56c46b83c2993" Namespace="calico-system" Pod="csi-node-driver-st9qh" WorkloadEndpoint="localhost-k8s-csi--node--driver--st9qh-eth0" Jan 13 21:15:20.703775 containerd[1434]: time="2025-01-13T21:15:20.703348473Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79d978d948-phbxp,Uid:eba042e6-b417-4f58-b615-4e861e1468fa,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"8c65932b3cf2063219d52aae862cb81349ecf476952b527c0322649507ab3937\"" Jan 13 21:15:20.706559 containerd[1434]: time="2025-01-13T21:15:20.706442322Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 13 21:15:20.726898 containerd[1434]: time="2025-01-13T21:15:20.721530007Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:15:20.726898 containerd[1434]: time="2025-01-13T21:15:20.721937288Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:15:20.726898 containerd[1434]: time="2025-01-13T21:15:20.721981128Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:15:20.726898 containerd[1434]: time="2025-01-13T21:15:20.722135728Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:15:20.743866 systemd[1]: Started cri-containerd-0971861583694845e19750b424e2582906449996d89f2ba729c56c46b83c2993.scope - libcontainer container 0971861583694845e19750b424e2582906449996d89f2ba729c56c46b83c2993. 
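At this point the journal has recorded three workload endpoints brought up on the node. Restating the key fields, copied verbatim from the entries above:

```python
# Summary of the workload endpoints created in the preceding entries; the pod
# names, sandbox IPs, host-side veth names and MACs are taken from the log.
endpoints = [
    {"pod": "coredns-6f6b679f8f-pcklw",          "ip": "192.168.88.129/32",
     "iface": "cali22cd610d14b", "mac": "4a:a6:a7:a7:2b:1f"},
    {"pod": "calico-apiserver-79d978d948-phbxp", "ip": "192.168.88.130/32",
     "iface": "cali8bc608c2867", "mac": "56:0a:b8:c3:37:b5"},
    {"pod": "csi-node-driver-st9qh",             "ip": "192.168.88.131/32",
     "iface": "calib345dfa7a21", "mac": "32:e2:02:c3:d6:c4"},
]
for ep in endpoints:
    print(f"{ep['pod']:<40} {ep['ip']:<20} {ep['iface']:<16} {ep['mac']}")
```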
Jan 13 21:15:20.757820 systemd-resolved[1303]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 13 21:15:20.783477 containerd[1434]: time="2025-01-13T21:15:20.782766027Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-st9qh,Uid:67345a72-9d66-4d9b-8d45-698aed92c23c,Namespace:calico-system,Attempt:1,} returns sandbox id \"0971861583694845e19750b424e2582906449996d89f2ba729c56c46b83c2993\"" Jan 13 21:15:20.918734 containerd[1434]: time="2025-01-13T21:15:20.918579949Z" level=info msg="StopPodSandbox for \"a3da4175aa673e0ee67bd76802f5b4d5b22e54b712944b48d594d5bdcc4f9156\"" Jan 13 21:15:20.920068 containerd[1434]: time="2025-01-13T21:15:20.918673109Z" level=info msg="StopPodSandbox for \"3fc229edecaa0e6fb512b5968a69dbac36c89051849e486433976bef55302188\"" Jan 13 21:15:21.020326 containerd[1434]: 2025-01-13 21:15:20.970 [INFO][4366] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="3fc229edecaa0e6fb512b5968a69dbac36c89051849e486433976bef55302188" Jan 13 21:15:21.020326 containerd[1434]: 2025-01-13 21:15:20.970 [INFO][4366] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="3fc229edecaa0e6fb512b5968a69dbac36c89051849e486433976bef55302188" iface="eth0" netns="/var/run/netns/cni-0a93d9fd-76ec-ba0a-8c6c-8c0f04831b06" Jan 13 21:15:21.020326 containerd[1434]: 2025-01-13 21:15:20.971 [INFO][4366] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="3fc229edecaa0e6fb512b5968a69dbac36c89051849e486433976bef55302188" iface="eth0" netns="/var/run/netns/cni-0a93d9fd-76ec-ba0a-8c6c-8c0f04831b06" Jan 13 21:15:21.020326 containerd[1434]: 2025-01-13 21:15:20.971 [INFO][4366] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="3fc229edecaa0e6fb512b5968a69dbac36c89051849e486433976bef55302188" iface="eth0" netns="/var/run/netns/cni-0a93d9fd-76ec-ba0a-8c6c-8c0f04831b06" Jan 13 21:15:21.020326 containerd[1434]: 2025-01-13 21:15:20.971 [INFO][4366] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="3fc229edecaa0e6fb512b5968a69dbac36c89051849e486433976bef55302188" Jan 13 21:15:21.020326 containerd[1434]: 2025-01-13 21:15:20.971 [INFO][4366] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3fc229edecaa0e6fb512b5968a69dbac36c89051849e486433976bef55302188" Jan 13 21:15:21.020326 containerd[1434]: 2025-01-13 21:15:21.000 [INFO][4381] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3fc229edecaa0e6fb512b5968a69dbac36c89051849e486433976bef55302188" HandleID="k8s-pod-network.3fc229edecaa0e6fb512b5968a69dbac36c89051849e486433976bef55302188" Workload="localhost-k8s-calico--apiserver--79d978d948--2zkm8-eth0" Jan 13 21:15:21.020326 containerd[1434]: 2025-01-13 21:15:21.001 [INFO][4381] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:15:21.020326 containerd[1434]: 2025-01-13 21:15:21.001 [INFO][4381] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:15:21.020326 containerd[1434]: 2025-01-13 21:15:21.011 [WARNING][4381] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3fc229edecaa0e6fb512b5968a69dbac36c89051849e486433976bef55302188" HandleID="k8s-pod-network.3fc229edecaa0e6fb512b5968a69dbac36c89051849e486433976bef55302188" Workload="localhost-k8s-calico--apiserver--79d978d948--2zkm8-eth0" Jan 13 21:15:21.020326 containerd[1434]: 2025-01-13 21:15:21.011 [INFO][4381] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3fc229edecaa0e6fb512b5968a69dbac36c89051849e486433976bef55302188" HandleID="k8s-pod-network.3fc229edecaa0e6fb512b5968a69dbac36c89051849e486433976bef55302188" Workload="localhost-k8s-calico--apiserver--79d978d948--2zkm8-eth0" Jan 13 21:15:21.020326 containerd[1434]: 2025-01-13 21:15:21.014 [INFO][4381] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:15:21.020326 containerd[1434]: 2025-01-13 21:15:21.016 [INFO][4366] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="3fc229edecaa0e6fb512b5968a69dbac36c89051849e486433976bef55302188" Jan 13 21:15:21.030457 containerd[1434]: 2025-01-13 21:15:20.981 [INFO][4371] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a3da4175aa673e0ee67bd76802f5b4d5b22e54b712944b48d594d5bdcc4f9156" Jan 13 21:15:21.030457 containerd[1434]: 2025-01-13 21:15:20.981 [INFO][4371] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a3da4175aa673e0ee67bd76802f5b4d5b22e54b712944b48d594d5bdcc4f9156" iface="eth0" netns="/var/run/netns/cni-fd80b7e4-17a5-eefb-8329-1ca5b27b5d03" Jan 13 21:15:21.030457 containerd[1434]: 2025-01-13 21:15:20.982 [INFO][4371] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a3da4175aa673e0ee67bd76802f5b4d5b22e54b712944b48d594d5bdcc4f9156" iface="eth0" netns="/var/run/netns/cni-fd80b7e4-17a5-eefb-8329-1ca5b27b5d03" Jan 13 21:15:21.030457 containerd[1434]: 2025-01-13 21:15:20.982 [INFO][4371] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="a3da4175aa673e0ee67bd76802f5b4d5b22e54b712944b48d594d5bdcc4f9156" iface="eth0" netns="/var/run/netns/cni-fd80b7e4-17a5-eefb-8329-1ca5b27b5d03" Jan 13 21:15:21.030457 containerd[1434]: 2025-01-13 21:15:20.982 [INFO][4371] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a3da4175aa673e0ee67bd76802f5b4d5b22e54b712944b48d594d5bdcc4f9156" Jan 13 21:15:21.030457 containerd[1434]: 2025-01-13 21:15:20.982 [INFO][4371] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a3da4175aa673e0ee67bd76802f5b4d5b22e54b712944b48d594d5bdcc4f9156" Jan 13 21:15:21.030457 containerd[1434]: 2025-01-13 21:15:21.014 [INFO][4387] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a3da4175aa673e0ee67bd76802f5b4d5b22e54b712944b48d594d5bdcc4f9156" HandleID="k8s-pod-network.a3da4175aa673e0ee67bd76802f5b4d5b22e54b712944b48d594d5bdcc4f9156" Workload="localhost-k8s-calico--kube--controllers--649ff95fb--x7s9h-eth0" Jan 13 21:15:21.030457 containerd[1434]: 2025-01-13 21:15:21.014 [INFO][4387] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:15:21.030457 containerd[1434]: 2025-01-13 21:15:21.015 [INFO][4387] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:15:21.030457 containerd[1434]: 2025-01-13 21:15:21.024 [WARNING][4387] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a3da4175aa673e0ee67bd76802f5b4d5b22e54b712944b48d594d5bdcc4f9156" HandleID="k8s-pod-network.a3da4175aa673e0ee67bd76802f5b4d5b22e54b712944b48d594d5bdcc4f9156" Workload="localhost-k8s-calico--kube--controllers--649ff95fb--x7s9h-eth0" Jan 13 21:15:21.030457 containerd[1434]: 2025-01-13 21:15:21.024 [INFO][4387] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a3da4175aa673e0ee67bd76802f5b4d5b22e54b712944b48d594d5bdcc4f9156" HandleID="k8s-pod-network.a3da4175aa673e0ee67bd76802f5b4d5b22e54b712944b48d594d5bdcc4f9156" Workload="localhost-k8s-calico--kube--controllers--649ff95fb--x7s9h-eth0" Jan 13 21:15:21.030457 containerd[1434]: 2025-01-13 21:15:21.026 [INFO][4387] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:15:21.030457 containerd[1434]: 2025-01-13 21:15:21.028 [INFO][4371] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="a3da4175aa673e0ee67bd76802f5b4d5b22e54b712944b48d594d5bdcc4f9156" Jan 13 21:15:21.033064 containerd[1434]: time="2025-01-13T21:15:21.019992645Z" level=info msg="TearDown network for sandbox \"3fc229edecaa0e6fb512b5968a69dbac36c89051849e486433976bef55302188\" successfully" Jan 13 21:15:21.033064 containerd[1434]: time="2025-01-13T21:15:21.033060401Z" level=info msg="StopPodSandbox for \"3fc229edecaa0e6fb512b5968a69dbac36c89051849e486433976bef55302188\" returns successfully" Jan 13 21:15:21.033173 containerd[1434]: time="2025-01-13T21:15:21.030456274Z" level=info msg="TearDown network for sandbox \"a3da4175aa673e0ee67bd76802f5b4d5b22e54b712944b48d594d5bdcc4f9156\" successfully" Jan 13 21:15:21.033201 containerd[1434]: time="2025-01-13T21:15:21.033175321Z" level=info msg="StopPodSandbox for \"a3da4175aa673e0ee67bd76802f5b4d5b22e54b712944b48d594d5bdcc4f9156\" returns successfully" Jan 13 21:15:21.034234 containerd[1434]: time="2025-01-13T21:15:21.033736403Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79d978d948-2zkm8,Uid:48748cb0-5484-4170-8079-80e03c4d2ae3,Namespace:calico-apiserver,Attempt:1,}" Jan 13 21:15:21.034234 containerd[1434]: time="2025-01-13T21:15:21.033760963Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-649ff95fb-x7s9h,Uid:fc42e8bc-befe-48fd-a086-67842b49de77,Namespace:calico-system,Attempt:1,}" Jan 13 21:15:21.156385 kubelet[2454]: E0113 21:15:21.156349 2454 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:15:21.167675 kubelet[2454]: I0113 21:15:21.167610 2454 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-pcklw" podStartSLOduration=31.167595413 podStartE2EDuration="31.167595413s" podCreationTimestamp="2025-01-13 21:14:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:15:21.166794171 +0000 UTC m=+36.343871753" watchObservedRunningTime="2025-01-13 21:15:21.167595413 +0000 UTC m=+36.344672995" Jan 13 21:15:21.177897 systemd-networkd[1376]: calid2eebf9800c: Link UP Jan 13 21:15:21.179269 systemd-networkd[1376]: calid2eebf9800c: Gained carrier Jan 13 21:15:21.208587 containerd[1434]: 2025-01-13 21:15:21.087 [INFO][4398] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--79d978d948--2zkm8-eth0 calico-apiserver-79d978d948- calico-apiserver 
48748cb0-5484-4170-8079-80e03c4d2ae3 871 0 2025-01-13 21:14:56 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:79d978d948 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-79d978d948-2zkm8 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calid2eebf9800c [] []}} ContainerID="f4b751fd01c3faaadefc82799ccf1e5d788c13f58fdae0ddaea5f9ed4d3b976f" Namespace="calico-apiserver" Pod="calico-apiserver-79d978d948-2zkm8" WorkloadEndpoint="localhost-k8s-calico--apiserver--79d978d948--2zkm8-" Jan 13 21:15:21.208587 containerd[1434]: 2025-01-13 21:15:21.088 [INFO][4398] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="f4b751fd01c3faaadefc82799ccf1e5d788c13f58fdae0ddaea5f9ed4d3b976f" Namespace="calico-apiserver" Pod="calico-apiserver-79d978d948-2zkm8" WorkloadEndpoint="localhost-k8s-calico--apiserver--79d978d948--2zkm8-eth0" Jan 13 21:15:21.208587 containerd[1434]: 2025-01-13 21:15:21.121 [INFO][4425] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f4b751fd01c3faaadefc82799ccf1e5d788c13f58fdae0ddaea5f9ed4d3b976f" HandleID="k8s-pod-network.f4b751fd01c3faaadefc82799ccf1e5d788c13f58fdae0ddaea5f9ed4d3b976f" Workload="localhost-k8s-calico--apiserver--79d978d948--2zkm8-eth0" Jan 13 21:15:21.208587 containerd[1434]: 2025-01-13 21:15:21.135 [INFO][4425] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f4b751fd01c3faaadefc82799ccf1e5d788c13f58fdae0ddaea5f9ed4d3b976f" HandleID="k8s-pod-network.f4b751fd01c3faaadefc82799ccf1e5d788c13f58fdae0ddaea5f9ed4d3b976f" Workload="localhost-k8s-calico--apiserver--79d978d948--2zkm8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003a8550), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-79d978d948-2zkm8", "timestamp":"2025-01-13 21:15:21.121362805 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 21:15:21.208587 containerd[1434]: 2025-01-13 21:15:21.136 [INFO][4425] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:15:21.208587 containerd[1434]: 2025-01-13 21:15:21.136 [INFO][4425] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 13 21:15:21.208587 containerd[1434]: 2025-01-13 21:15:21.136 [INFO][4425] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 13 21:15:21.208587 containerd[1434]: 2025-01-13 21:15:21.138 [INFO][4425] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.f4b751fd01c3faaadefc82799ccf1e5d788c13f58fdae0ddaea5f9ed4d3b976f" host="localhost" Jan 13 21:15:21.208587 containerd[1434]: 2025-01-13 21:15:21.142 [INFO][4425] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 13 21:15:21.208587 containerd[1434]: 2025-01-13 21:15:21.148 [INFO][4425] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 13 21:15:21.208587 containerd[1434]: 2025-01-13 21:15:21.150 [INFO][4425] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 13 21:15:21.208587 containerd[1434]: 2025-01-13 21:15:21.153 [INFO][4425] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 13 21:15:21.208587 containerd[1434]: 2025-01-13 21:15:21.153 [INFO][4425] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f4b751fd01c3faaadefc82799ccf1e5d788c13f58fdae0ddaea5f9ed4d3b976f" host="localhost" Jan 13 21:15:21.208587 containerd[1434]: 2025-01-13 21:15:21.155 [INFO][4425] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.f4b751fd01c3faaadefc82799ccf1e5d788c13f58fdae0ddaea5f9ed4d3b976f Jan 13 21:15:21.208587 containerd[1434]: 2025-01-13 21:15:21.162 [INFO][4425] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f4b751fd01c3faaadefc82799ccf1e5d788c13f58fdae0ddaea5f9ed4d3b976f" host="localhost" Jan 13 21:15:21.208587 containerd[1434]: 2025-01-13 21:15:21.169 [INFO][4425] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.f4b751fd01c3faaadefc82799ccf1e5d788c13f58fdae0ddaea5f9ed4d3b976f" host="localhost" Jan 13 21:15:21.208587 containerd[1434]: 2025-01-13 21:15:21.169 [INFO][4425] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.f4b751fd01c3faaadefc82799ccf1e5d788c13f58fdae0ddaea5f9ed4d3b976f" host="localhost" Jan 13 21:15:21.208587 containerd[1434]: 2025-01-13 21:15:21.169 [INFO][4425] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 13 21:15:21.208587 containerd[1434]: 2025-01-13 21:15:21.169 [INFO][4425] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="f4b751fd01c3faaadefc82799ccf1e5d788c13f58fdae0ddaea5f9ed4d3b976f" HandleID="k8s-pod-network.f4b751fd01c3faaadefc82799ccf1e5d788c13f58fdae0ddaea5f9ed4d3b976f" Workload="localhost-k8s-calico--apiserver--79d978d948--2zkm8-eth0" Jan 13 21:15:21.209149 containerd[1434]: 2025-01-13 21:15:21.173 [INFO][4398] cni-plugin/k8s.go 386: Populated endpoint ContainerID="f4b751fd01c3faaadefc82799ccf1e5d788c13f58fdae0ddaea5f9ed4d3b976f" Namespace="calico-apiserver" Pod="calico-apiserver-79d978d948-2zkm8" WorkloadEndpoint="localhost-k8s-calico--apiserver--79d978d948--2zkm8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--79d978d948--2zkm8-eth0", GenerateName:"calico-apiserver-79d978d948-", Namespace:"calico-apiserver", SelfLink:"", UID:"48748cb0-5484-4170-8079-80e03c4d2ae3", ResourceVersion:"871", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 14, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"79d978d948", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-79d978d948-2zkm8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid2eebf9800c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:15:21.209149 containerd[1434]: 2025-01-13 21:15:21.173 [INFO][4398] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="f4b751fd01c3faaadefc82799ccf1e5d788c13f58fdae0ddaea5f9ed4d3b976f" Namespace="calico-apiserver" Pod="calico-apiserver-79d978d948-2zkm8" WorkloadEndpoint="localhost-k8s-calico--apiserver--79d978d948--2zkm8-eth0" Jan 13 21:15:21.209149 containerd[1434]: 2025-01-13 21:15:21.173 [INFO][4398] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid2eebf9800c ContainerID="f4b751fd01c3faaadefc82799ccf1e5d788c13f58fdae0ddaea5f9ed4d3b976f" Namespace="calico-apiserver" Pod="calico-apiserver-79d978d948-2zkm8" WorkloadEndpoint="localhost-k8s-calico--apiserver--79d978d948--2zkm8-eth0" Jan 13 21:15:21.209149 containerd[1434]: 2025-01-13 21:15:21.178 [INFO][4398] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f4b751fd01c3faaadefc82799ccf1e5d788c13f58fdae0ddaea5f9ed4d3b976f" Namespace="calico-apiserver" Pod="calico-apiserver-79d978d948-2zkm8" WorkloadEndpoint="localhost-k8s-calico--apiserver--79d978d948--2zkm8-eth0" Jan 13 21:15:21.209149 containerd[1434]: 2025-01-13 21:15:21.187 [INFO][4398] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="f4b751fd01c3faaadefc82799ccf1e5d788c13f58fdae0ddaea5f9ed4d3b976f" Namespace="calico-apiserver" Pod="calico-apiserver-79d978d948-2zkm8" WorkloadEndpoint="localhost-k8s-calico--apiserver--79d978d948--2zkm8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--79d978d948--2zkm8-eth0", GenerateName:"calico-apiserver-79d978d948-", Namespace:"calico-apiserver", SelfLink:"", UID:"48748cb0-5484-4170-8079-80e03c4d2ae3", ResourceVersion:"871", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 14, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"79d978d948", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f4b751fd01c3faaadefc82799ccf1e5d788c13f58fdae0ddaea5f9ed4d3b976f", Pod:"calico-apiserver-79d978d948-2zkm8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid2eebf9800c", MAC:"7e:c9:f2:fc:9f:fb", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:15:21.209149 containerd[1434]: 2025-01-13 21:15:21.204 [INFO][4398] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="f4b751fd01c3faaadefc82799ccf1e5d788c13f58fdae0ddaea5f9ed4d3b976f" Namespace="calico-apiserver" Pod="calico-apiserver-79d978d948-2zkm8" WorkloadEndpoint="localhost-k8s-calico--apiserver--79d978d948--2zkm8-eth0" Jan 13 21:15:21.209437 systemd[1]: run-netns-cni\x2d0a93d9fd\x2d76ec\x2dba0a\x2d8c6c\x2d8c0f04831b06.mount: Deactivated successfully. Jan 13 21:15:21.209526 systemd[1]: run-netns-cni\x2dfd80b7e4\x2d17a5\x2deefb\x2d8329\x2d1ca5b27b5d03.mount: Deactivated successfully. Jan 13 21:15:21.232792 containerd[1434]: time="2025-01-13T21:15:21.230677428Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:15:21.232792 containerd[1434]: time="2025-01-13T21:15:21.230765788Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:15:21.232792 containerd[1434]: time="2025-01-13T21:15:21.230781348Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:15:21.232792 containerd[1434]: time="2025-01-13T21:15:21.231468190Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:15:21.269897 systemd[1]: Started cri-containerd-f4b751fd01c3faaadefc82799ccf1e5d788c13f58fdae0ddaea5f9ed4d3b976f.scope - libcontainer container f4b751fd01c3faaadefc82799ccf1e5d788c13f58fdae0ddaea5f9ed4d3b976f. 
Jan 13 21:15:21.277215 systemd-networkd[1376]: calia9b7810c71a: Link UP Jan 13 21:15:21.280246 systemd-networkd[1376]: calia9b7810c71a: Gained carrier Jan 13 21:15:21.287917 systemd-resolved[1303]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 13 21:15:21.298760 containerd[1434]: 2025-01-13 21:15:21.097 [INFO][4410] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--649ff95fb--x7s9h-eth0 calico-kube-controllers-649ff95fb- calico-system fc42e8bc-befe-48fd-a086-67842b49de77 872 0 2025-01-13 21:14:57 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:649ff95fb projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-649ff95fb-x7s9h eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calia9b7810c71a [] []}} ContainerID="27ed870435ce315b162576de2b3a3be6444bae3cb38610a55468ded177388245" Namespace="calico-system" Pod="calico-kube-controllers-649ff95fb-x7s9h" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--649ff95fb--x7s9h-" Jan 13 21:15:21.298760 containerd[1434]: 2025-01-13 21:15:21.097 [INFO][4410] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="27ed870435ce315b162576de2b3a3be6444bae3cb38610a55468ded177388245" Namespace="calico-system" Pod="calico-kube-controllers-649ff95fb-x7s9h" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--649ff95fb--x7s9h-eth0" Jan 13 21:15:21.298760 containerd[1434]: 2025-01-13 21:15:21.126 [INFO][4431] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="27ed870435ce315b162576de2b3a3be6444bae3cb38610a55468ded177388245" HandleID="k8s-pod-network.27ed870435ce315b162576de2b3a3be6444bae3cb38610a55468ded177388245" Workload="localhost-k8s-calico--kube--controllers--649ff95fb--x7s9h-eth0" Jan 13 21:15:21.298760 containerd[1434]: 2025-01-13 21:15:21.138 [INFO][4431] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="27ed870435ce315b162576de2b3a3be6444bae3cb38610a55468ded177388245" HandleID="k8s-pod-network.27ed870435ce315b162576de2b3a3be6444bae3cb38610a55468ded177388245" Workload="localhost-k8s-calico--kube--controllers--649ff95fb--x7s9h-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004dad0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-649ff95fb-x7s9h", "timestamp":"2025-01-13 21:15:21.12667142 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 21:15:21.298760 containerd[1434]: 2025-01-13 21:15:21.139 [INFO][4431] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:15:21.298760 containerd[1434]: 2025-01-13 21:15:21.169 [INFO][4431] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 13 21:15:21.298760 containerd[1434]: 2025-01-13 21:15:21.170 [INFO][4431] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 13 21:15:21.298760 containerd[1434]: 2025-01-13 21:15:21.239 [INFO][4431] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.27ed870435ce315b162576de2b3a3be6444bae3cb38610a55468ded177388245" host="localhost" Jan 13 21:15:21.298760 containerd[1434]: 2025-01-13 21:15:21.247 [INFO][4431] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 13 21:15:21.298760 containerd[1434]: 2025-01-13 21:15:21.252 [INFO][4431] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 13 21:15:21.298760 containerd[1434]: 2025-01-13 21:15:21.254 [INFO][4431] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 13 21:15:21.298760 containerd[1434]: 2025-01-13 21:15:21.256 [INFO][4431] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 13 21:15:21.298760 containerd[1434]: 2025-01-13 21:15:21.257 [INFO][4431] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.27ed870435ce315b162576de2b3a3be6444bae3cb38610a55468ded177388245" host="localhost" Jan 13 21:15:21.298760 containerd[1434]: 2025-01-13 21:15:21.259 [INFO][4431] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.27ed870435ce315b162576de2b3a3be6444bae3cb38610a55468ded177388245 Jan 13 21:15:21.298760 containerd[1434]: 2025-01-13 21:15:21.264 [INFO][4431] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.27ed870435ce315b162576de2b3a3be6444bae3cb38610a55468ded177388245" host="localhost" Jan 13 21:15:21.298760 containerd[1434]: 2025-01-13 21:15:21.271 [INFO][4431] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.27ed870435ce315b162576de2b3a3be6444bae3cb38610a55468ded177388245" host="localhost" Jan 13 21:15:21.298760 containerd[1434]: 2025-01-13 21:15:21.271 [INFO][4431] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.27ed870435ce315b162576de2b3a3be6444bae3cb38610a55468ded177388245" host="localhost" Jan 13 21:15:21.298760 containerd[1434]: 2025-01-13 21:15:21.271 [INFO][4431] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 13 21:15:21.298760 containerd[1434]: 2025-01-13 21:15:21.271 [INFO][4431] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="27ed870435ce315b162576de2b3a3be6444bae3cb38610a55468ded177388245" HandleID="k8s-pod-network.27ed870435ce315b162576de2b3a3be6444bae3cb38610a55468ded177388245" Workload="localhost-k8s-calico--kube--controllers--649ff95fb--x7s9h-eth0" Jan 13 21:15:21.299309 containerd[1434]: 2025-01-13 21:15:21.273 [INFO][4410] cni-plugin/k8s.go 386: Populated endpoint ContainerID="27ed870435ce315b162576de2b3a3be6444bae3cb38610a55468ded177388245" Namespace="calico-system" Pod="calico-kube-controllers-649ff95fb-x7s9h" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--649ff95fb--x7s9h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--649ff95fb--x7s9h-eth0", GenerateName:"calico-kube-controllers-649ff95fb-", Namespace:"calico-system", SelfLink:"", UID:"fc42e8bc-befe-48fd-a086-67842b49de77", ResourceVersion:"872", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 14, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"649ff95fb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-649ff95fb-x7s9h", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia9b7810c71a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:15:21.299309 containerd[1434]: 2025-01-13 21:15:21.274 [INFO][4410] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="27ed870435ce315b162576de2b3a3be6444bae3cb38610a55468ded177388245" Namespace="calico-system" Pod="calico-kube-controllers-649ff95fb-x7s9h" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--649ff95fb--x7s9h-eth0" Jan 13 21:15:21.299309 containerd[1434]: 2025-01-13 21:15:21.274 [INFO][4410] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia9b7810c71a ContainerID="27ed870435ce315b162576de2b3a3be6444bae3cb38610a55468ded177388245" Namespace="calico-system" Pod="calico-kube-controllers-649ff95fb-x7s9h" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--649ff95fb--x7s9h-eth0" Jan 13 21:15:21.299309 containerd[1434]: 2025-01-13 21:15:21.278 [INFO][4410] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="27ed870435ce315b162576de2b3a3be6444bae3cb38610a55468ded177388245" Namespace="calico-system" Pod="calico-kube-controllers-649ff95fb-x7s9h" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--649ff95fb--x7s9h-eth0" Jan 13 21:15:21.299309 containerd[1434]: 2025-01-13 21:15:21.278 [INFO][4410] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to 
endpoint ContainerID="27ed870435ce315b162576de2b3a3be6444bae3cb38610a55468ded177388245" Namespace="calico-system" Pod="calico-kube-controllers-649ff95fb-x7s9h" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--649ff95fb--x7s9h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--649ff95fb--x7s9h-eth0", GenerateName:"calico-kube-controllers-649ff95fb-", Namespace:"calico-system", SelfLink:"", UID:"fc42e8bc-befe-48fd-a086-67842b49de77", ResourceVersion:"872", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 14, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"649ff95fb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"27ed870435ce315b162576de2b3a3be6444bae3cb38610a55468ded177388245", Pod:"calico-kube-controllers-649ff95fb-x7s9h", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia9b7810c71a", MAC:"96:bd:5a:b2:85:1c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:15:21.299309 containerd[1434]: 2025-01-13 21:15:21.293 [INFO][4410] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="27ed870435ce315b162576de2b3a3be6444bae3cb38610a55468ded177388245" Namespace="calico-system" Pod="calico-kube-controllers-649ff95fb-x7s9h" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--649ff95fb--x7s9h-eth0" Jan 13 21:15:21.313250 containerd[1434]: time="2025-01-13T21:15:21.312829256Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79d978d948-2zkm8,Uid:48748cb0-5484-4170-8079-80e03c4d2ae3,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"f4b751fd01c3faaadefc82799ccf1e5d788c13f58fdae0ddaea5f9ed4d3b976f\"" Jan 13 21:15:21.320798 containerd[1434]: time="2025-01-13T21:15:21.320677637Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:15:21.320798 containerd[1434]: time="2025-01-13T21:15:21.320774598Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:15:21.320798 containerd[1434]: time="2025-01-13T21:15:21.320789358Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:15:21.321030 containerd[1434]: time="2025-01-13T21:15:21.320870278Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:15:21.338199 systemd[1]: Started cri-containerd-27ed870435ce315b162576de2b3a3be6444bae3cb38610a55468ded177388245.scope - libcontainer container 27ed870435ce315b162576de2b3a3be6444bae3cb38610a55468ded177388245. Jan 13 21:15:21.350039 systemd-resolved[1303]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 13 21:15:21.367094 containerd[1434]: time="2025-01-13T21:15:21.367053086Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-649ff95fb-x7s9h,Uid:fc42e8bc-befe-48fd-a086-67842b49de77,Namespace:calico-system,Attempt:1,} returns sandbox id \"27ed870435ce315b162576de2b3a3be6444bae3cb38610a55468ded177388245\"" Jan 13 21:15:21.553601 systemd[1]: Started sshd@8-10.0.0.48:22-10.0.0.1:52048.service - OpenSSH per-connection server daemon (10.0.0.1:52048). Jan 13 21:15:21.601766 sshd[4560]: Accepted publickey for core from 10.0.0.1 port 52048 ssh2: RSA SHA256:yd4gyStb+mhc+KSvOhXa4vXVFZWeTXZvH887VPDApJg Jan 13 21:15:21.603572 sshd[4560]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:15:21.608633 systemd-logind[1418]: New session 9 of user core. Jan 13 21:15:21.624581 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 13 21:15:21.740839 systemd-networkd[1376]: calib345dfa7a21: Gained IPv6LL Jan 13 21:15:21.817665 sshd[4560]: pam_unix(sshd:session): session closed for user core Jan 13 21:15:21.820809 systemd[1]: sshd@8-10.0.0.48:22-10.0.0.1:52048.service: Deactivated successfully. Jan 13 21:15:21.822774 systemd[1]: session-9.scope: Deactivated successfully. Jan 13 21:15:21.825152 systemd-logind[1418]: Session 9 logged out. Waiting for processes to exit. Jan 13 21:15:21.826119 systemd-logind[1418]: Removed session 9. Jan 13 21:15:21.918939 containerd[1434]: time="2025-01-13T21:15:21.917757291Z" level=info msg="StopPodSandbox for \"57ecb9b8ccdf0ab6b04474895c30e66b2c063daae7f98e21907bf3549d822357\"" Jan 13 21:15:21.933893 systemd-networkd[1376]: cali8bc608c2867: Gained IPv6LL Jan 13 21:15:21.999742 containerd[1434]: 2025-01-13 21:15:21.964 [INFO][4592] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="57ecb9b8ccdf0ab6b04474895c30e66b2c063daae7f98e21907bf3549d822357" Jan 13 21:15:21.999742 containerd[1434]: 2025-01-13 21:15:21.964 [INFO][4592] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="57ecb9b8ccdf0ab6b04474895c30e66b2c063daae7f98e21907bf3549d822357" iface="eth0" netns="/var/run/netns/cni-9a624e07-d9de-b815-337c-47027a1b633c" Jan 13 21:15:21.999742 containerd[1434]: 2025-01-13 21:15:21.964 [INFO][4592] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="57ecb9b8ccdf0ab6b04474895c30e66b2c063daae7f98e21907bf3549d822357" iface="eth0" netns="/var/run/netns/cni-9a624e07-d9de-b815-337c-47027a1b633c" Jan 13 21:15:21.999742 containerd[1434]: 2025-01-13 21:15:21.964 [INFO][4592] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="57ecb9b8ccdf0ab6b04474895c30e66b2c063daae7f98e21907bf3549d822357" iface="eth0" netns="/var/run/netns/cni-9a624e07-d9de-b815-337c-47027a1b633c" Jan 13 21:15:21.999742 containerd[1434]: 2025-01-13 21:15:21.964 [INFO][4592] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="57ecb9b8ccdf0ab6b04474895c30e66b2c063daae7f98e21907bf3549d822357" Jan 13 21:15:21.999742 containerd[1434]: 2025-01-13 21:15:21.964 [INFO][4592] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="57ecb9b8ccdf0ab6b04474895c30e66b2c063daae7f98e21907bf3549d822357" Jan 13 21:15:21.999742 containerd[1434]: 2025-01-13 21:15:21.984 [INFO][4600] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="57ecb9b8ccdf0ab6b04474895c30e66b2c063daae7f98e21907bf3549d822357" HandleID="k8s-pod-network.57ecb9b8ccdf0ab6b04474895c30e66b2c063daae7f98e21907bf3549d822357" Workload="localhost-k8s-coredns--6f6b679f8f--t7c2j-eth0" Jan 13 21:15:21.999742 containerd[1434]: 2025-01-13 21:15:21.984 [INFO][4600] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:15:21.999742 containerd[1434]: 2025-01-13 21:15:21.984 [INFO][4600] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:15:21.999742 containerd[1434]: 2025-01-13 21:15:21.994 [WARNING][4600] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="57ecb9b8ccdf0ab6b04474895c30e66b2c063daae7f98e21907bf3549d822357" HandleID="k8s-pod-network.57ecb9b8ccdf0ab6b04474895c30e66b2c063daae7f98e21907bf3549d822357" Workload="localhost-k8s-coredns--6f6b679f8f--t7c2j-eth0" Jan 13 21:15:21.999742 containerd[1434]: 2025-01-13 21:15:21.994 [INFO][4600] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="57ecb9b8ccdf0ab6b04474895c30e66b2c063daae7f98e21907bf3549d822357" HandleID="k8s-pod-network.57ecb9b8ccdf0ab6b04474895c30e66b2c063daae7f98e21907bf3549d822357" Workload="localhost-k8s-coredns--6f6b679f8f--t7c2j-eth0" Jan 13 21:15:21.999742 containerd[1434]: 2025-01-13 21:15:21.995 [INFO][4600] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:15:21.999742 containerd[1434]: 2025-01-13 21:15:21.997 [INFO][4592] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="57ecb9b8ccdf0ab6b04474895c30e66b2c063daae7f98e21907bf3549d822357" Jan 13 21:15:22.001142 containerd[1434]: time="2025-01-13T21:15:21.999825078Z" level=info msg="TearDown network for sandbox \"57ecb9b8ccdf0ab6b04474895c30e66b2c063daae7f98e21907bf3549d822357\" successfully" Jan 13 21:15:22.001142 containerd[1434]: time="2025-01-13T21:15:21.999853838Z" level=info msg="StopPodSandbox for \"57ecb9b8ccdf0ab6b04474895c30e66b2c063daae7f98e21907bf3549d822357\" returns successfully" Jan 13 21:15:22.001142 containerd[1434]: time="2025-01-13T21:15:22.000484800Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-t7c2j,Uid:6deed8be-443c-4c20-8288-97ef2040b5e5,Namespace:kube-system,Attempt:1,}" Jan 13 21:15:22.001237 kubelet[2454]: E0113 21:15:22.000154 2454 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:15:22.150652 systemd-networkd[1376]: cali68a008584c4: Link UP Jan 13 21:15:22.151156 systemd-networkd[1376]: cali68a008584c4: Gained carrier Jan 13 21:15:22.168885 kubelet[2454]: E0113 21:15:22.166860 2454 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:15:22.174592 containerd[1434]: 2025-01-13 21:15:22.043 [INFO][4607] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--6f6b679f8f--t7c2j-eth0 coredns-6f6b679f8f- kube-system 6deed8be-443c-4c20-8288-97ef2040b5e5 901 0 2025-01-13 21:14:50 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-6f6b679f8f-t7c2j eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali68a008584c4 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="2e4bfa78bbee4ab77b277bd409aaf7206bb5c7a08ba71784f71e8c308012018a" Namespace="kube-system" Pod="coredns-6f6b679f8f-t7c2j" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--t7c2j-" Jan 13 21:15:22.174592 containerd[1434]: 2025-01-13 21:15:22.043 [INFO][4607] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="2e4bfa78bbee4ab77b277bd409aaf7206bb5c7a08ba71784f71e8c308012018a" Namespace="kube-system" Pod="coredns-6f6b679f8f-t7c2j" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--t7c2j-eth0" Jan 13 21:15:22.174592 containerd[1434]: 2025-01-13 21:15:22.075 [INFO][4620] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2e4bfa78bbee4ab77b277bd409aaf7206bb5c7a08ba71784f71e8c308012018a" HandleID="k8s-pod-network.2e4bfa78bbee4ab77b277bd409aaf7206bb5c7a08ba71784f71e8c308012018a" Workload="localhost-k8s-coredns--6f6b679f8f--t7c2j-eth0" Jan 13 21:15:22.174592 containerd[1434]: 2025-01-13 21:15:22.097 [INFO][4620] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2e4bfa78bbee4ab77b277bd409aaf7206bb5c7a08ba71784f71e8c308012018a" HandleID="k8s-pod-network.2e4bfa78bbee4ab77b277bd409aaf7206bb5c7a08ba71784f71e8c308012018a" Workload="localhost-k8s-coredns--6f6b679f8f--t7c2j-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003aad90), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-6f6b679f8f-t7c2j", "timestamp":"2025-01-13 21:15:22.075559755 +0000 UTC"}, 
Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 21:15:22.174592 containerd[1434]: 2025-01-13 21:15:22.097 [INFO][4620] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:15:22.174592 containerd[1434]: 2025-01-13 21:15:22.097 [INFO][4620] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:15:22.174592 containerd[1434]: 2025-01-13 21:15:22.098 [INFO][4620] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 13 21:15:22.174592 containerd[1434]: 2025-01-13 21:15:22.104 [INFO][4620] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.2e4bfa78bbee4ab77b277bd409aaf7206bb5c7a08ba71784f71e8c308012018a" host="localhost" Jan 13 21:15:22.174592 containerd[1434]: 2025-01-13 21:15:22.112 [INFO][4620] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 13 21:15:22.174592 containerd[1434]: 2025-01-13 21:15:22.119 [INFO][4620] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 13 21:15:22.174592 containerd[1434]: 2025-01-13 21:15:22.121 [INFO][4620] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 13 21:15:22.174592 containerd[1434]: 2025-01-13 21:15:22.124 [INFO][4620] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 13 21:15:22.174592 containerd[1434]: 2025-01-13 21:15:22.124 [INFO][4620] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.2e4bfa78bbee4ab77b277bd409aaf7206bb5c7a08ba71784f71e8c308012018a" host="localhost" Jan 13 21:15:22.174592 containerd[1434]: 2025-01-13 21:15:22.126 [INFO][4620] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.2e4bfa78bbee4ab77b277bd409aaf7206bb5c7a08ba71784f71e8c308012018a Jan 13 21:15:22.174592 containerd[1434]: 2025-01-13 21:15:22.131 [INFO][4620] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.2e4bfa78bbee4ab77b277bd409aaf7206bb5c7a08ba71784f71e8c308012018a" host="localhost" Jan 13 21:15:22.174592 containerd[1434]: 2025-01-13 21:15:22.140 [INFO][4620] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.2e4bfa78bbee4ab77b277bd409aaf7206bb5c7a08ba71784f71e8c308012018a" host="localhost" Jan 13 21:15:22.174592 containerd[1434]: 2025-01-13 21:15:22.141 [INFO][4620] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.2e4bfa78bbee4ab77b277bd409aaf7206bb5c7a08ba71784f71e8c308012018a" host="localhost" Jan 13 21:15:22.174592 containerd[1434]: 2025-01-13 21:15:22.141 [INFO][4620] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 13 21:15:22.174592 containerd[1434]: 2025-01-13 21:15:22.141 [INFO][4620] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="2e4bfa78bbee4ab77b277bd409aaf7206bb5c7a08ba71784f71e8c308012018a" HandleID="k8s-pod-network.2e4bfa78bbee4ab77b277bd409aaf7206bb5c7a08ba71784f71e8c308012018a" Workload="localhost-k8s-coredns--6f6b679f8f--t7c2j-eth0" Jan 13 21:15:22.175177 containerd[1434]: 2025-01-13 21:15:22.148 [INFO][4607] cni-plugin/k8s.go 386: Populated endpoint ContainerID="2e4bfa78bbee4ab77b277bd409aaf7206bb5c7a08ba71784f71e8c308012018a" Namespace="kube-system" Pod="coredns-6f6b679f8f-t7c2j" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--t7c2j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--t7c2j-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"6deed8be-443c-4c20-8288-97ef2040b5e5", ResourceVersion:"901", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 14, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-6f6b679f8f-t7c2j", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali68a008584c4", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:15:22.175177 containerd[1434]: 2025-01-13 21:15:22.148 [INFO][4607] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.134/32] ContainerID="2e4bfa78bbee4ab77b277bd409aaf7206bb5c7a08ba71784f71e8c308012018a" Namespace="kube-system" Pod="coredns-6f6b679f8f-t7c2j" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--t7c2j-eth0" Jan 13 21:15:22.175177 containerd[1434]: 2025-01-13 21:15:22.148 [INFO][4607] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali68a008584c4 ContainerID="2e4bfa78bbee4ab77b277bd409aaf7206bb5c7a08ba71784f71e8c308012018a" Namespace="kube-system" Pod="coredns-6f6b679f8f-t7c2j" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--t7c2j-eth0" Jan 13 21:15:22.175177 containerd[1434]: 2025-01-13 21:15:22.150 [INFO][4607] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2e4bfa78bbee4ab77b277bd409aaf7206bb5c7a08ba71784f71e8c308012018a" Namespace="kube-system" Pod="coredns-6f6b679f8f-t7c2j" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--t7c2j-eth0" Jan 13 21:15:22.175177 containerd[1434]: 2025-01-13 21:15:22.151 
[INFO][4607] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="2e4bfa78bbee4ab77b277bd409aaf7206bb5c7a08ba71784f71e8c308012018a" Namespace="kube-system" Pod="coredns-6f6b679f8f-t7c2j" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--t7c2j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--t7c2j-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"6deed8be-443c-4c20-8288-97ef2040b5e5", ResourceVersion:"901", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 14, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2e4bfa78bbee4ab77b277bd409aaf7206bb5c7a08ba71784f71e8c308012018a", Pod:"coredns-6f6b679f8f-t7c2j", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali68a008584c4", MAC:"de:79:2c:b0:2e:a7", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:15:22.175177 containerd[1434]: 2025-01-13 21:15:22.169 [INFO][4607] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="2e4bfa78bbee4ab77b277bd409aaf7206bb5c7a08ba71784f71e8c308012018a" Namespace="kube-system" Pod="coredns-6f6b679f8f-t7c2j" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--t7c2j-eth0" Jan 13 21:15:22.209320 systemd[1]: run-netns-cni\x2d9a624e07\x2dd9de\x2db815\x2d337c\x2d47027a1b633c.mount: Deactivated successfully. Jan 13 21:15:22.212133 containerd[1434]: time="2025-01-13T21:15:22.212013389Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:15:22.212133 containerd[1434]: time="2025-01-13T21:15:22.212091270Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:15:22.212133 containerd[1434]: time="2025-01-13T21:15:22.212103670Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:15:22.212337 containerd[1434]: time="2025-01-13T21:15:22.212252590Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:15:22.239933 systemd[1]: Started cri-containerd-2e4bfa78bbee4ab77b277bd409aaf7206bb5c7a08ba71784f71e8c308012018a.scope - libcontainer container 2e4bfa78bbee4ab77b277bd409aaf7206bb5c7a08ba71784f71e8c308012018a. Jan 13 21:15:22.252692 systemd-resolved[1303]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 13 21:15:22.276774 containerd[1434]: time="2025-01-13T21:15:22.276735117Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-t7c2j,Uid:6deed8be-443c-4c20-8288-97ef2040b5e5,Namespace:kube-system,Attempt:1,} returns sandbox id \"2e4bfa78bbee4ab77b277bd409aaf7206bb5c7a08ba71784f71e8c308012018a\"" Jan 13 21:15:22.277560 kubelet[2454]: E0113 21:15:22.277535 2454 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:15:22.294655 containerd[1434]: time="2025-01-13T21:15:22.294608284Z" level=info msg="CreateContainer within sandbox \"2e4bfa78bbee4ab77b277bd409aaf7206bb5c7a08ba71784f71e8c308012018a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 13 21:15:22.317103 systemd-networkd[1376]: cali22cd610d14b: Gained IPv6LL Jan 13 21:15:22.362303 containerd[1434]: time="2025-01-13T21:15:22.362246739Z" level=info msg="CreateContainer within sandbox \"2e4bfa78bbee4ab77b277bd409aaf7206bb5c7a08ba71784f71e8c308012018a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9d9a63d86b681863004aa4af05a9c2bcd1b04519161b5656e38bb97a058d1bc4\"" Jan 13 21:15:22.362899 containerd[1434]: time="2025-01-13T21:15:22.362779621Z" level=info msg="StartContainer for \"9d9a63d86b681863004aa4af05a9c2bcd1b04519161b5656e38bb97a058d1bc4\"" Jan 13 21:15:22.395015 systemd[1]: Started cri-containerd-9d9a63d86b681863004aa4af05a9c2bcd1b04519161b5656e38bb97a058d1bc4.scope - libcontainer container 9d9a63d86b681863004aa4af05a9c2bcd1b04519161b5656e38bb97a058d1bc4. 
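The coredns WorkloadEndpoint dump above prints its ports in hexadecimal (Port:0x35 and Port:0x23c1). Converting them confirms these are the usual DNS and CoreDNS metrics ports:

# Port values copied from the endpoint dump above, printed by Go in hex.
ports = {"dns (UDP)": 0x35, "dns-tcp (TCP)": 0x35, "metrics (TCP)": 0x23c1}
for name, value in ports.items():
    print(f"{name}: {value}")  # 53, 53 and 9153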
Jan 13 21:15:22.426231 containerd[1434]: time="2025-01-13T21:15:22.426116305Z" level=info msg="StartContainer for \"9d9a63d86b681863004aa4af05a9c2bcd1b04519161b5656e38bb97a058d1bc4\" returns successfully" Jan 13 21:15:22.764838 systemd-networkd[1376]: calid2eebf9800c: Gained IPv6LL Jan 13 21:15:22.837417 containerd[1434]: time="2025-01-13T21:15:22.837136292Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:15:22.839455 containerd[1434]: time="2025-01-13T21:15:22.839415458Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=39298409" Jan 13 21:15:22.840325 containerd[1434]: time="2025-01-13T21:15:22.840272661Z" level=info msg="ImageCreate event name:\"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:15:22.842633 containerd[1434]: time="2025-01-13T21:15:22.842583427Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:15:22.843223 containerd[1434]: time="2025-01-13T21:15:22.843189788Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"40668079\" in 2.136707506s" Jan 13 21:15:22.843258 containerd[1434]: time="2025-01-13T21:15:22.843227428Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\"" Jan 13 21:15:22.844526 containerd[1434]: time="2025-01-13T21:15:22.844489351Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Jan 13 21:15:22.846088 containerd[1434]: time="2025-01-13T21:15:22.846048036Z" level=info msg="CreateContainer within sandbox \"8c65932b3cf2063219d52aae862cb81349ecf476952b527c0322649507ab3937\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 13 21:15:22.861580 containerd[1434]: time="2025-01-13T21:15:22.861392235Z" level=info msg="CreateContainer within sandbox \"8c65932b3cf2063219d52aae862cb81349ecf476952b527c0322649507ab3937\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"e9f5d9b3628eef9a259cd5f6a84b361dc8bc37c56f639ecb3aadf548547b8336\"" Jan 13 21:15:22.863294 containerd[1434]: time="2025-01-13T21:15:22.862057077Z" level=info msg="StartContainer for \"e9f5d9b3628eef9a259cd5f6a84b361dc8bc37c56f639ecb3aadf548547b8336\"" Jan 13 21:15:22.900918 systemd[1]: Started cri-containerd-e9f5d9b3628eef9a259cd5f6a84b361dc8bc37c56f639ecb3aadf548547b8336.scope - libcontainer container e9f5d9b3628eef9a259cd5f6a84b361dc8bc37c56f639ecb3aadf548547b8336. 
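A rough throughput figure for the calico/apiserver pull reported above, assuming the "bytes read" counter is the amount actually transferred for this pull (the adjacent "size" field is the larger unpacked image size):

# Figures copied from the containerd entries above.
bytes_read = 39_298_409   # "active requests=0, bytes read=39298409"
duration_s = 2.136707506  # "... in 2.136707506s"

print(f"~{bytes_read / duration_s / 1_048_576:.1f} MiB/s")  # roughly 17.5 MiB/s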
Jan 13 21:15:22.938272 containerd[1434]: time="2025-01-13T21:15:22.938219355Z" level=info msg="StartContainer for \"e9f5d9b3628eef9a259cd5f6a84b361dc8bc37c56f639ecb3aadf548547b8336\" returns successfully" Jan 13 21:15:23.084964 systemd-networkd[1376]: calia9b7810c71a: Gained IPv6LL Jan 13 21:15:23.178215 kubelet[2454]: E0113 21:15:23.178168 2454 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:15:23.178711 kubelet[2454]: E0113 21:15:23.178672 2454 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:15:23.191228 kubelet[2454]: I0113 21:15:23.191131 2454 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-79d978d948-phbxp" podStartSLOduration=25.053217472 podStartE2EDuration="27.191115461s" podCreationTimestamp="2025-01-13 21:14:56 +0000 UTC" firstStartedPulling="2025-01-13 21:15:20.706115161 +0000 UTC m=+35.883192703" lastFinishedPulling="2025-01-13 21:15:22.84401303 +0000 UTC m=+38.021090692" observedRunningTime="2025-01-13 21:15:23.190238698 +0000 UTC m=+38.367316240" watchObservedRunningTime="2025-01-13 21:15:23.191115461 +0000 UTC m=+38.368193043" Jan 13 21:15:23.222723 kubelet[2454]: I0113 21:15:23.222653 2454 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-t7c2j" podStartSLOduration=33.222632697 podStartE2EDuration="33.222632697s" podCreationTimestamp="2025-01-13 21:14:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:15:23.221339334 +0000 UTC m=+38.398416996" watchObservedRunningTime="2025-01-13 21:15:23.222632697 +0000 UTC m=+38.399710279" Jan 13 21:15:23.541825 kubelet[2454]: I0113 21:15:23.541455 2454 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 13 21:15:23.541937 kubelet[2454]: E0113 21:15:23.541907 2454 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:15:23.660814 systemd-networkd[1376]: cali68a008584c4: Gained IPv6LL Jan 13 21:15:24.093262 containerd[1434]: time="2025-01-13T21:15:24.093216922Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:15:24.094338 containerd[1434]: time="2025-01-13T21:15:24.094122524Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7464730" Jan 13 21:15:24.095286 containerd[1434]: time="2025-01-13T21:15:24.095250567Z" level=info msg="ImageCreate event name:\"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:15:24.100975 containerd[1434]: time="2025-01-13T21:15:24.100289418Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:15:24.101499 containerd[1434]: time="2025-01-13T21:15:24.101366021Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id 
\"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"8834384\" in 1.256833789s" Jan 13 21:15:24.101499 containerd[1434]: time="2025-01-13T21:15:24.101402181Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\"" Jan 13 21:15:24.103131 containerd[1434]: time="2025-01-13T21:15:24.102497943Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 13 21:15:24.104381 containerd[1434]: time="2025-01-13T21:15:24.104248507Z" level=info msg="CreateContainer within sandbox \"0971861583694845e19750b424e2582906449996d89f2ba729c56c46b83c2993\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jan 13 21:15:24.122174 containerd[1434]: time="2025-01-13T21:15:24.122117148Z" level=info msg="CreateContainer within sandbox \"0971861583694845e19750b424e2582906449996d89f2ba729c56c46b83c2993\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"4b07bbf4791b9ad61df09fba60963cdff1cde8162b403ab0333d9af0fa2f1a02\"" Jan 13 21:15:24.124364 containerd[1434]: time="2025-01-13T21:15:24.123136271Z" level=info msg="StartContainer for \"4b07bbf4791b9ad61df09fba60963cdff1cde8162b403ab0333d9af0fa2f1a02\"" Jan 13 21:15:24.156929 systemd[1]: Started cri-containerd-4b07bbf4791b9ad61df09fba60963cdff1cde8162b403ab0333d9af0fa2f1a02.scope - libcontainer container 4b07bbf4791b9ad61df09fba60963cdff1cde8162b403ab0333d9af0fa2f1a02. Jan 13 21:15:24.182451 kubelet[2454]: I0113 21:15:24.182417 2454 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 13 21:15:24.183621 kubelet[2454]: E0113 21:15:24.183103 2454 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:15:24.183621 kubelet[2454]: E0113 21:15:24.183459 2454 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:15:24.221225 containerd[1434]: time="2025-01-13T21:15:24.220279892Z" level=info msg="StartContainer for \"4b07bbf4791b9ad61df09fba60963cdff1cde8162b403ab0333d9af0fa2f1a02\" returns successfully" Jan 13 21:15:24.375269 containerd[1434]: time="2025-01-13T21:15:24.375150046Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:15:24.375964 containerd[1434]: time="2025-01-13T21:15:24.375843887Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77" Jan 13 21:15:24.378411 containerd[1434]: time="2025-01-13T21:15:24.378359213Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"40668079\" in 275.231188ms" Jan 13 21:15:24.378411 containerd[1434]: time="2025-01-13T21:15:24.378404613Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference 
\"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\"" Jan 13 21:15:24.380430 containerd[1434]: time="2025-01-13T21:15:24.379545736Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Jan 13 21:15:24.381657 containerd[1434]: time="2025-01-13T21:15:24.381616660Z" level=info msg="CreateContainer within sandbox \"f4b751fd01c3faaadefc82799ccf1e5d788c13f58fdae0ddaea5f9ed4d3b976f\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 13 21:15:24.394166 containerd[1434]: time="2025-01-13T21:15:24.394107209Z" level=info msg="CreateContainer within sandbox \"f4b751fd01c3faaadefc82799ccf1e5d788c13f58fdae0ddaea5f9ed4d3b976f\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"63149a871527c3018346e523909e555b2fbd01b449cea1e3d509d0e1397c11ac\"" Jan 13 21:15:24.394779 containerd[1434]: time="2025-01-13T21:15:24.394751570Z" level=info msg="StartContainer for \"63149a871527c3018346e523909e555b2fbd01b449cea1e3d509d0e1397c11ac\"" Jan 13 21:15:24.429881 systemd[1]: Started cri-containerd-63149a871527c3018346e523909e555b2fbd01b449cea1e3d509d0e1397c11ac.scope - libcontainer container 63149a871527c3018346e523909e555b2fbd01b449cea1e3d509d0e1397c11ac. Jan 13 21:15:24.464877 containerd[1434]: time="2025-01-13T21:15:24.464822770Z" level=info msg="StartContainer for \"63149a871527c3018346e523909e555b2fbd01b449cea1e3d509d0e1397c11ac\" returns successfully" Jan 13 21:15:25.194404 kubelet[2454]: E0113 21:15:25.194092 2454 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:15:25.217358 kubelet[2454]: I0113 21:15:25.215865 2454 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-79d978d948-2zkm8" podStartSLOduration=26.150914418 podStartE2EDuration="29.215845613s" podCreationTimestamp="2025-01-13 21:14:56 +0000 UTC" firstStartedPulling="2025-01-13 21:15:21.3144151 +0000 UTC m=+36.491492682" lastFinishedPulling="2025-01-13 21:15:24.379346255 +0000 UTC m=+39.556423877" observedRunningTime="2025-01-13 21:15:25.206130113 +0000 UTC m=+40.383207695" watchObservedRunningTime="2025-01-13 21:15:25.215845613 +0000 UTC m=+40.392923155" Jan 13 21:15:26.156854 containerd[1434]: time="2025-01-13T21:15:26.156795126Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:15:26.194964 kubelet[2454]: I0113 21:15:26.194912 2454 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 13 21:15:26.210999 containerd[1434]: time="2025-01-13T21:15:26.210914194Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=31953828" Jan 13 21:15:26.225382 containerd[1434]: time="2025-01-13T21:15:26.225106903Z" level=info msg="ImageCreate event name:\"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:15:26.239151 containerd[1434]: time="2025-01-13T21:15:26.237906009Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:15:26.239151 containerd[1434]: time="2025-01-13T21:15:26.238854850Z" level=info msg="Pulled image 
\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"33323450\" in 1.858412392s" Jan 13 21:15:26.239151 containerd[1434]: time="2025-01-13T21:15:26.238898051Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\"" Jan 13 21:15:26.242192 containerd[1434]: time="2025-01-13T21:15:26.242145497Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Jan 13 21:15:26.249342 containerd[1434]: time="2025-01-13T21:15:26.249301271Z" level=info msg="CreateContainer within sandbox \"27ed870435ce315b162576de2b3a3be6444bae3cb38610a55468ded177388245\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jan 13 21:15:26.383238 containerd[1434]: time="2025-01-13T21:15:26.383180420Z" level=info msg="CreateContainer within sandbox \"27ed870435ce315b162576de2b3a3be6444bae3cb38610a55468ded177388245\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"406631b799819d45ae621e777dd3070fca059bb319d39cab0d035fdfdf1e0f96\"" Jan 13 21:15:26.383990 containerd[1434]: time="2025-01-13T21:15:26.383966902Z" level=info msg="StartContainer for \"406631b799819d45ae621e777dd3070fca059bb319d39cab0d035fdfdf1e0f96\"" Jan 13 21:15:26.427917 systemd[1]: Started cri-containerd-406631b799819d45ae621e777dd3070fca059bb319d39cab0d035fdfdf1e0f96.scope - libcontainer container 406631b799819d45ae621e777dd3070fca059bb319d39cab0d035fdfdf1e0f96. Jan 13 21:15:26.458154 containerd[1434]: time="2025-01-13T21:15:26.458017530Z" level=info msg="StartContainer for \"406631b799819d45ae621e777dd3070fca059bb319d39cab0d035fdfdf1e0f96\" returns successfully" Jan 13 21:15:26.832168 systemd[1]: Started sshd@9-10.0.0.48:22-10.0.0.1:34892.service - OpenSSH per-connection server daemon (10.0.0.1:34892). Jan 13 21:15:26.898095 sshd[4949]: Accepted publickey for core from 10.0.0.1 port 34892 ssh2: RSA SHA256:yd4gyStb+mhc+KSvOhXa4vXVFZWeTXZvH887VPDApJg Jan 13 21:15:26.899936 sshd[4949]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:15:26.912785 systemd-logind[1418]: New session 10 of user core. Jan 13 21:15:26.920874 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 13 21:15:27.105218 sshd[4949]: pam_unix(sshd:session): session closed for user core Jan 13 21:15:27.118425 systemd[1]: sshd@9-10.0.0.48:22-10.0.0.1:34892.service: Deactivated successfully. Jan 13 21:15:27.120184 systemd[1]: session-10.scope: Deactivated successfully. Jan 13 21:15:27.120967 systemd-logind[1418]: Session 10 logged out. Waiting for processes to exit. Jan 13 21:15:27.128021 systemd[1]: Started sshd@10-10.0.0.48:22-10.0.0.1:34896.service - OpenSSH per-connection server daemon (10.0.0.1:34896). Jan 13 21:15:27.132605 systemd-logind[1418]: Removed session 10. Jan 13 21:15:27.164711 sshd[4965]: Accepted publickey for core from 10.0.0.1 port 34896 ssh2: RSA SHA256:yd4gyStb+mhc+KSvOhXa4vXVFZWeTXZvH887VPDApJg Jan 13 21:15:27.166145 sshd[4965]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:15:27.170207 systemd-logind[1418]: New session 11 of user core. 
Jan 13 21:15:27.175924 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 13 21:15:27.211420 kubelet[2454]: I0113 21:15:27.211353 2454 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-649ff95fb-x7s9h" podStartSLOduration=25.33931365 podStartE2EDuration="30.211321174s" podCreationTimestamp="2025-01-13 21:14:57 +0000 UTC" firstStartedPulling="2025-01-13 21:15:21.36855633 +0000 UTC m=+36.545633872" lastFinishedPulling="2025-01-13 21:15:26.240563814 +0000 UTC m=+41.417641396" observedRunningTime="2025-01-13 21:15:27.210273893 +0000 UTC m=+42.387351475" watchObservedRunningTime="2025-01-13 21:15:27.211321174 +0000 UTC m=+42.388398716" Jan 13 21:15:27.248009 systemd[1]: run-containerd-runc-k8s.io-406631b799819d45ae621e777dd3070fca059bb319d39cab0d035fdfdf1e0f96-runc.Fg94uG.mount: Deactivated successfully. Jan 13 21:15:27.436144 sshd[4965]: pam_unix(sshd:session): session closed for user core Jan 13 21:15:27.446500 systemd[1]: sshd@10-10.0.0.48:22-10.0.0.1:34896.service: Deactivated successfully. Jan 13 21:15:27.452747 systemd[1]: session-11.scope: Deactivated successfully. Jan 13 21:15:27.457188 systemd-logind[1418]: Session 11 logged out. Waiting for processes to exit. Jan 13 21:15:27.469027 systemd[1]: Started sshd@11-10.0.0.48:22-10.0.0.1:34908.service - OpenSSH per-connection server daemon (10.0.0.1:34908). Jan 13 21:15:27.472243 systemd-logind[1418]: Removed session 11. Jan 13 21:15:27.523018 sshd[4982]: Accepted publickey for core from 10.0.0.1 port 34908 ssh2: RSA SHA256:yd4gyStb+mhc+KSvOhXa4vXVFZWeTXZvH887VPDApJg Jan 13 21:15:27.524769 sshd[4982]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:15:27.531112 systemd-logind[1418]: New session 12 of user core. Jan 13 21:15:27.544119 systemd[1]: Started session-12.scope - Session 12 of User core. 
Jan 13 21:15:27.584921 containerd[1434]: time="2025-01-13T21:15:27.584867997Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:15:27.586233 containerd[1434]: time="2025-01-13T21:15:27.586194399Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=9883368" Jan 13 21:15:27.593936 containerd[1434]: time="2025-01-13T21:15:27.593894054Z" level=info msg="ImageCreate event name:\"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:15:27.597113 containerd[1434]: time="2025-01-13T21:15:27.596441019Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:15:27.597113 containerd[1434]: time="2025-01-13T21:15:27.596969420Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11252974\" in 1.354784683s" Jan 13 21:15:27.597113 containerd[1434]: time="2025-01-13T21:15:27.597002820Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\"" Jan 13 21:15:27.601823 containerd[1434]: time="2025-01-13T21:15:27.601762429Z" level=info msg="CreateContainer within sandbox \"0971861583694845e19750b424e2582906449996d89f2ba729c56c46b83c2993\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jan 13 21:15:27.624162 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount976940938.mount: Deactivated successfully. Jan 13 21:15:27.628381 containerd[1434]: time="2025-01-13T21:15:27.628234038Z" level=info msg="CreateContainer within sandbox \"0971861583694845e19750b424e2582906449996d89f2ba729c56c46b83c2993\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"835c1960b22cc0320ff2da470de5520cd14ee6966cbe5e0048d4b8abc4c04a19\"" Jan 13 21:15:27.629283 containerd[1434]: time="2025-01-13T21:15:27.629128600Z" level=info msg="StartContainer for \"835c1960b22cc0320ff2da470de5520cd14ee6966cbe5e0048d4b8abc4c04a19\"" Jan 13 21:15:27.665717 systemd[1]: Started cri-containerd-835c1960b22cc0320ff2da470de5520cd14ee6966cbe5e0048d4b8abc4c04a19.scope - libcontainer container 835c1960b22cc0320ff2da470de5520cd14ee6966cbe5e0048d4b8abc4c04a19. Jan 13 21:15:27.709793 containerd[1434]: time="2025-01-13T21:15:27.709639751Z" level=info msg="StartContainer for \"835c1960b22cc0320ff2da470de5520cd14ee6966cbe5e0048d4b8abc4c04a19\" returns successfully" Jan 13 21:15:27.768163 sshd[4982]: pam_unix(sshd:session): session closed for user core Jan 13 21:15:27.774768 systemd[1]: sshd@11-10.0.0.48:22-10.0.0.1:34908.service: Deactivated successfully. Jan 13 21:15:27.776861 systemd[1]: session-12.scope: Deactivated successfully. Jan 13 21:15:27.777555 systemd-logind[1418]: Session 12 logged out. Waiting for processes to exit. Jan 13 21:15:27.778519 systemd-logind[1418]: Removed session 12. 
Jan 13 21:15:28.002054 kubelet[2454]: I0113 21:15:28.001933 2454 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jan 13 21:15:28.003546 kubelet[2454]: I0113 21:15:28.003504 2454 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jan 13 21:15:28.225315 kubelet[2454]: I0113 21:15:28.225239 2454 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-st9qh" podStartSLOduration=24.411191624 podStartE2EDuration="31.225217815s" podCreationTimestamp="2025-01-13 21:14:57 +0000 UTC" firstStartedPulling="2025-01-13 21:15:20.784100111 +0000 UTC m=+35.961177653" lastFinishedPulling="2025-01-13 21:15:27.598126262 +0000 UTC m=+42.775203844" observedRunningTime="2025-01-13 21:15:28.225023774 +0000 UTC m=+43.402101356" watchObservedRunningTime="2025-01-13 21:15:28.225217815 +0000 UTC m=+43.402295397" Jan 13 21:15:32.781382 systemd[1]: Started sshd@12-10.0.0.48:22-10.0.0.1:40514.service - OpenSSH per-connection server daemon (10.0.0.1:40514). Jan 13 21:15:32.850772 sshd[5063]: Accepted publickey for core from 10.0.0.1 port 40514 ssh2: RSA SHA256:yd4gyStb+mhc+KSvOhXa4vXVFZWeTXZvH887VPDApJg Jan 13 21:15:32.852084 sshd[5063]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:15:32.856182 systemd-logind[1418]: New session 13 of user core. Jan 13 21:15:32.868851 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 13 21:15:33.121428 sshd[5063]: pam_unix(sshd:session): session closed for user core Jan 13 21:15:33.136450 systemd[1]: sshd@12-10.0.0.48:22-10.0.0.1:40514.service: Deactivated successfully. Jan 13 21:15:33.139270 systemd[1]: session-13.scope: Deactivated successfully. Jan 13 21:15:33.140824 systemd-logind[1418]: Session 13 logged out. Waiting for processes to exit. Jan 13 21:15:33.147368 systemd[1]: Started sshd@13-10.0.0.48:22-10.0.0.1:40528.service - OpenSSH per-connection server daemon (10.0.0.1:40528). Jan 13 21:15:33.148749 systemd-logind[1418]: Removed session 13. Jan 13 21:15:33.182819 sshd[5078]: Accepted publickey for core from 10.0.0.1 port 40528 ssh2: RSA SHA256:yd4gyStb+mhc+KSvOhXa4vXVFZWeTXZvH887VPDApJg Jan 13 21:15:33.184251 sshd[5078]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:15:33.188773 systemd-logind[1418]: New session 14 of user core. Jan 13 21:15:33.194874 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 13 21:15:33.412939 sshd[5078]: pam_unix(sshd:session): session closed for user core Jan 13 21:15:33.422610 systemd[1]: sshd@13-10.0.0.48:22-10.0.0.1:40528.service: Deactivated successfully. Jan 13 21:15:33.424556 systemd[1]: session-14.scope: Deactivated successfully. Jan 13 21:15:33.426044 systemd-logind[1418]: Session 14 logged out. Waiting for processes to exit. Jan 13 21:15:33.431292 systemd[1]: Started sshd@14-10.0.0.48:22-10.0.0.1:40540.service - OpenSSH per-connection server daemon (10.0.0.1:40540). Jan 13 21:15:33.432959 systemd-logind[1418]: Removed session 14. Jan 13 21:15:33.472262 sshd[5090]: Accepted publickey for core from 10.0.0.1 port 40540 ssh2: RSA SHA256:yd4gyStb+mhc+KSvOhXa4vXVFZWeTXZvH887VPDApJg Jan 13 21:15:33.473662 sshd[5090]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:15:33.478760 systemd-logind[1418]: New session 15 of user core. 
Jan 13 21:15:33.483867 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 13 21:15:34.926838 sshd[5090]: pam_unix(sshd:session): session closed for user core Jan 13 21:15:34.935939 systemd[1]: sshd@14-10.0.0.48:22-10.0.0.1:40540.service: Deactivated successfully. Jan 13 21:15:34.937559 systemd[1]: session-15.scope: Deactivated successfully. Jan 13 21:15:34.942174 systemd-logind[1418]: Session 15 logged out. Waiting for processes to exit. Jan 13 21:15:34.948258 systemd[1]: Started sshd@15-10.0.0.48:22-10.0.0.1:40552.service - OpenSSH per-connection server daemon (10.0.0.1:40552). Jan 13 21:15:34.950890 systemd-logind[1418]: Removed session 15. Jan 13 21:15:34.996276 sshd[5112]: Accepted publickey for core from 10.0.0.1 port 40552 ssh2: RSA SHA256:yd4gyStb+mhc+KSvOhXa4vXVFZWeTXZvH887VPDApJg Jan 13 21:15:34.997782 sshd[5112]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:15:35.001504 systemd-logind[1418]: New session 16 of user core. Jan 13 21:15:35.010935 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 13 21:15:35.339571 sshd[5112]: pam_unix(sshd:session): session closed for user core Jan 13 21:15:35.352205 systemd[1]: sshd@15-10.0.0.48:22-10.0.0.1:40552.service: Deactivated successfully. Jan 13 21:15:35.354046 systemd[1]: session-16.scope: Deactivated successfully. Jan 13 21:15:35.355857 systemd-logind[1418]: Session 16 logged out. Waiting for processes to exit. Jan 13 21:15:35.366537 systemd[1]: Started sshd@16-10.0.0.48:22-10.0.0.1:40568.service - OpenSSH per-connection server daemon (10.0.0.1:40568). Jan 13 21:15:35.367899 systemd-logind[1418]: Removed session 16. Jan 13 21:15:35.400878 sshd[5125]: Accepted publickey for core from 10.0.0.1 port 40568 ssh2: RSA SHA256:yd4gyStb+mhc+KSvOhXa4vXVFZWeTXZvH887VPDApJg Jan 13 21:15:35.402162 sshd[5125]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:15:35.406903 systemd-logind[1418]: New session 17 of user core. Jan 13 21:15:35.415893 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 13 21:15:35.541051 sshd[5125]: pam_unix(sshd:session): session closed for user core Jan 13 21:15:35.545428 systemd[1]: sshd@16-10.0.0.48:22-10.0.0.1:40568.service: Deactivated successfully. Jan 13 21:15:35.547247 systemd[1]: session-17.scope: Deactivated successfully. Jan 13 21:15:35.548212 systemd-logind[1418]: Session 17 logged out. Waiting for processes to exit. Jan 13 21:15:35.549004 systemd-logind[1418]: Removed session 17. Jan 13 21:15:40.566431 systemd[1]: Started sshd@17-10.0.0.48:22-10.0.0.1:40578.service - OpenSSH per-connection server daemon (10.0.0.1:40578). Jan 13 21:15:40.605085 sshd[5145]: Accepted publickey for core from 10.0.0.1 port 40578 ssh2: RSA SHA256:yd4gyStb+mhc+KSvOhXa4vXVFZWeTXZvH887VPDApJg Jan 13 21:15:40.606669 sshd[5145]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:15:40.611174 systemd-logind[1418]: New session 18 of user core. Jan 13 21:15:40.619894 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 13 21:15:40.624535 kubelet[2454]: I0113 21:15:40.624482 2454 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 13 21:15:40.839682 sshd[5145]: pam_unix(sshd:session): session closed for user core Jan 13 21:15:40.846341 systemd[1]: sshd@17-10.0.0.48:22-10.0.0.1:40578.service: Deactivated successfully. Jan 13 21:15:40.850112 systemd[1]: session-18.scope: Deactivated successfully. 
Jan 13 21:15:40.851116 systemd-logind[1418]: Session 18 logged out. Waiting for processes to exit. Jan 13 21:15:40.852890 systemd-logind[1418]: Removed session 18. Jan 13 21:15:44.914014 containerd[1434]: time="2025-01-13T21:15:44.913965263Z" level=info msg="StopPodSandbox for \"a3da4175aa673e0ee67bd76802f5b4d5b22e54b712944b48d594d5bdcc4f9156\"" Jan 13 21:15:45.058183 containerd[1434]: 2025-01-13 21:15:44.997 [WARNING][5183] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="a3da4175aa673e0ee67bd76802f5b4d5b22e54b712944b48d594d5bdcc4f9156" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--649ff95fb--x7s9h-eth0", GenerateName:"calico-kube-controllers-649ff95fb-", Namespace:"calico-system", SelfLink:"", UID:"fc42e8bc-befe-48fd-a086-67842b49de77", ResourceVersion:"996", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 14, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"649ff95fb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"27ed870435ce315b162576de2b3a3be6444bae3cb38610a55468ded177388245", Pod:"calico-kube-controllers-649ff95fb-x7s9h", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia9b7810c71a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:15:45.058183 containerd[1434]: 2025-01-13 21:15:44.998 [INFO][5183] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a3da4175aa673e0ee67bd76802f5b4d5b22e54b712944b48d594d5bdcc4f9156" Jan 13 21:15:45.058183 containerd[1434]: 2025-01-13 21:15:44.998 [INFO][5183] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="a3da4175aa673e0ee67bd76802f5b4d5b22e54b712944b48d594d5bdcc4f9156" iface="eth0" netns="" Jan 13 21:15:45.058183 containerd[1434]: 2025-01-13 21:15:44.998 [INFO][5183] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a3da4175aa673e0ee67bd76802f5b4d5b22e54b712944b48d594d5bdcc4f9156" Jan 13 21:15:45.058183 containerd[1434]: 2025-01-13 21:15:44.998 [INFO][5183] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a3da4175aa673e0ee67bd76802f5b4d5b22e54b712944b48d594d5bdcc4f9156" Jan 13 21:15:45.058183 containerd[1434]: 2025-01-13 21:15:45.037 [INFO][5191] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a3da4175aa673e0ee67bd76802f5b4d5b22e54b712944b48d594d5bdcc4f9156" HandleID="k8s-pod-network.a3da4175aa673e0ee67bd76802f5b4d5b22e54b712944b48d594d5bdcc4f9156" Workload="localhost-k8s-calico--kube--controllers--649ff95fb--x7s9h-eth0" Jan 13 21:15:45.058183 containerd[1434]: 2025-01-13 21:15:45.037 [INFO][5191] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:15:45.058183 containerd[1434]: 2025-01-13 21:15:45.038 [INFO][5191] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:15:45.058183 containerd[1434]: 2025-01-13 21:15:45.049 [WARNING][5191] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a3da4175aa673e0ee67bd76802f5b4d5b22e54b712944b48d594d5bdcc4f9156" HandleID="k8s-pod-network.a3da4175aa673e0ee67bd76802f5b4d5b22e54b712944b48d594d5bdcc4f9156" Workload="localhost-k8s-calico--kube--controllers--649ff95fb--x7s9h-eth0" Jan 13 21:15:45.058183 containerd[1434]: 2025-01-13 21:15:45.049 [INFO][5191] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a3da4175aa673e0ee67bd76802f5b4d5b22e54b712944b48d594d5bdcc4f9156" HandleID="k8s-pod-network.a3da4175aa673e0ee67bd76802f5b4d5b22e54b712944b48d594d5bdcc4f9156" Workload="localhost-k8s-calico--kube--controllers--649ff95fb--x7s9h-eth0" Jan 13 21:15:45.058183 containerd[1434]: 2025-01-13 21:15:45.051 [INFO][5191] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:15:45.058183 containerd[1434]: 2025-01-13 21:15:45.054 [INFO][5183] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="a3da4175aa673e0ee67bd76802f5b4d5b22e54b712944b48d594d5bdcc4f9156" Jan 13 21:15:45.059086 containerd[1434]: time="2025-01-13T21:15:45.058458417Z" level=info msg="TearDown network for sandbox \"a3da4175aa673e0ee67bd76802f5b4d5b22e54b712944b48d594d5bdcc4f9156\" successfully" Jan 13 21:15:45.059086 containerd[1434]: time="2025-01-13T21:15:45.058488658Z" level=info msg="StopPodSandbox for \"a3da4175aa673e0ee67bd76802f5b4d5b22e54b712944b48d594d5bdcc4f9156\" returns successfully" Jan 13 21:15:45.059193 containerd[1434]: time="2025-01-13T21:15:45.059160409Z" level=info msg="RemovePodSandbox for \"a3da4175aa673e0ee67bd76802f5b4d5b22e54b712944b48d594d5bdcc4f9156\"" Jan 13 21:15:45.064559 containerd[1434]: time="2025-01-13T21:15:45.064495613Z" level=info msg="Forcibly stopping sandbox \"a3da4175aa673e0ee67bd76802f5b4d5b22e54b712944b48d594d5bdcc4f9156\"" Jan 13 21:15:45.158370 containerd[1434]: 2025-01-13 21:15:45.117 [WARNING][5214] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a3da4175aa673e0ee67bd76802f5b4d5b22e54b712944b48d594d5bdcc4f9156" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--649ff95fb--x7s9h-eth0", GenerateName:"calico-kube-controllers-649ff95fb-", Namespace:"calico-system", SelfLink:"", UID:"fc42e8bc-befe-48fd-a086-67842b49de77", ResourceVersion:"996", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 14, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"649ff95fb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"27ed870435ce315b162576de2b3a3be6444bae3cb38610a55468ded177388245", Pod:"calico-kube-controllers-649ff95fb-x7s9h", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia9b7810c71a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:15:45.158370 containerd[1434]: 2025-01-13 21:15:45.118 [INFO][5214] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a3da4175aa673e0ee67bd76802f5b4d5b22e54b712944b48d594d5bdcc4f9156" Jan 13 21:15:45.158370 containerd[1434]: 2025-01-13 21:15:45.118 [INFO][5214] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a3da4175aa673e0ee67bd76802f5b4d5b22e54b712944b48d594d5bdcc4f9156" iface="eth0" netns="" Jan 13 21:15:45.158370 containerd[1434]: 2025-01-13 21:15:45.118 [INFO][5214] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a3da4175aa673e0ee67bd76802f5b4d5b22e54b712944b48d594d5bdcc4f9156" Jan 13 21:15:45.158370 containerd[1434]: 2025-01-13 21:15:45.118 [INFO][5214] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a3da4175aa673e0ee67bd76802f5b4d5b22e54b712944b48d594d5bdcc4f9156" Jan 13 21:15:45.158370 containerd[1434]: 2025-01-13 21:15:45.140 [INFO][5221] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a3da4175aa673e0ee67bd76802f5b4d5b22e54b712944b48d594d5bdcc4f9156" HandleID="k8s-pod-network.a3da4175aa673e0ee67bd76802f5b4d5b22e54b712944b48d594d5bdcc4f9156" Workload="localhost-k8s-calico--kube--controllers--649ff95fb--x7s9h-eth0" Jan 13 21:15:45.158370 containerd[1434]: 2025-01-13 21:15:45.140 [INFO][5221] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:15:45.158370 containerd[1434]: 2025-01-13 21:15:45.140 [INFO][5221] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:15:45.158370 containerd[1434]: 2025-01-13 21:15:45.151 [WARNING][5221] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a3da4175aa673e0ee67bd76802f5b4d5b22e54b712944b48d594d5bdcc4f9156" HandleID="k8s-pod-network.a3da4175aa673e0ee67bd76802f5b4d5b22e54b712944b48d594d5bdcc4f9156" Workload="localhost-k8s-calico--kube--controllers--649ff95fb--x7s9h-eth0" Jan 13 21:15:45.158370 containerd[1434]: 2025-01-13 21:15:45.151 [INFO][5221] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a3da4175aa673e0ee67bd76802f5b4d5b22e54b712944b48d594d5bdcc4f9156" HandleID="k8s-pod-network.a3da4175aa673e0ee67bd76802f5b4d5b22e54b712944b48d594d5bdcc4f9156" Workload="localhost-k8s-calico--kube--controllers--649ff95fb--x7s9h-eth0" Jan 13 21:15:45.158370 containerd[1434]: 2025-01-13 21:15:45.153 [INFO][5221] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:15:45.158370 containerd[1434]: 2025-01-13 21:15:45.156 [INFO][5214] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="a3da4175aa673e0ee67bd76802f5b4d5b22e54b712944b48d594d5bdcc4f9156" Jan 13 21:15:45.160904 containerd[1434]: time="2025-01-13T21:15:45.158610392Z" level=info msg="TearDown network for sandbox \"a3da4175aa673e0ee67bd76802f5b4d5b22e54b712944b48d594d5bdcc4f9156\" successfully" Jan 13 21:15:45.177275 containerd[1434]: time="2025-01-13T21:15:45.177116477Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a3da4175aa673e0ee67bd76802f5b4d5b22e54b712944b48d594d5bdcc4f9156\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 13 21:15:45.177275 containerd[1434]: time="2025-01-13T21:15:45.177245963Z" level=info msg="RemovePodSandbox \"a3da4175aa673e0ee67bd76802f5b4d5b22e54b712944b48d594d5bdcc4f9156\" returns successfully" Jan 13 21:15:45.178879 containerd[1434]: time="2025-01-13T21:15:45.178838836Z" level=info msg="StopPodSandbox for \"cccb7205aa79c7ea0deaae1d2a32d8972a8301305da0eddd0cdf6e347de11ff8\"" Jan 13 21:15:45.259505 containerd[1434]: 2025-01-13 21:15:45.214 [WARNING][5244] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="cccb7205aa79c7ea0deaae1d2a32d8972a8301305da0eddd0cdf6e347de11ff8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--st9qh-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"67345a72-9d66-4d9b-8d45-698aed92c23c", ResourceVersion:"994", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 14, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0971861583694845e19750b424e2582906449996d89f2ba729c56c46b83c2993", Pod:"csi-node-driver-st9qh", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calib345dfa7a21", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:15:45.259505 containerd[1434]: 2025-01-13 21:15:45.215 [INFO][5244] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="cccb7205aa79c7ea0deaae1d2a32d8972a8301305da0eddd0cdf6e347de11ff8" Jan 13 21:15:45.259505 containerd[1434]: 2025-01-13 21:15:45.215 [INFO][5244] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="cccb7205aa79c7ea0deaae1d2a32d8972a8301305da0eddd0cdf6e347de11ff8" iface="eth0" netns="" Jan 13 21:15:45.259505 containerd[1434]: 2025-01-13 21:15:45.215 [INFO][5244] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="cccb7205aa79c7ea0deaae1d2a32d8972a8301305da0eddd0cdf6e347de11ff8" Jan 13 21:15:45.259505 containerd[1434]: 2025-01-13 21:15:45.215 [INFO][5244] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cccb7205aa79c7ea0deaae1d2a32d8972a8301305da0eddd0cdf6e347de11ff8" Jan 13 21:15:45.259505 containerd[1434]: 2025-01-13 21:15:45.236 [INFO][5251] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cccb7205aa79c7ea0deaae1d2a32d8972a8301305da0eddd0cdf6e347de11ff8" HandleID="k8s-pod-network.cccb7205aa79c7ea0deaae1d2a32d8972a8301305da0eddd0cdf6e347de11ff8" Workload="localhost-k8s-csi--node--driver--st9qh-eth0" Jan 13 21:15:45.259505 containerd[1434]: 2025-01-13 21:15:45.236 [INFO][5251] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:15:45.259505 containerd[1434]: 2025-01-13 21:15:45.236 [INFO][5251] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:15:45.259505 containerd[1434]: 2025-01-13 21:15:45.252 [WARNING][5251] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="cccb7205aa79c7ea0deaae1d2a32d8972a8301305da0eddd0cdf6e347de11ff8" HandleID="k8s-pod-network.cccb7205aa79c7ea0deaae1d2a32d8972a8301305da0eddd0cdf6e347de11ff8" Workload="localhost-k8s-csi--node--driver--st9qh-eth0" Jan 13 21:15:45.259505 containerd[1434]: 2025-01-13 21:15:45.252 [INFO][5251] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cccb7205aa79c7ea0deaae1d2a32d8972a8301305da0eddd0cdf6e347de11ff8" HandleID="k8s-pod-network.cccb7205aa79c7ea0deaae1d2a32d8972a8301305da0eddd0cdf6e347de11ff8" Workload="localhost-k8s-csi--node--driver--st9qh-eth0" Jan 13 21:15:45.259505 containerd[1434]: 2025-01-13 21:15:45.254 [INFO][5251] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:15:45.259505 containerd[1434]: 2025-01-13 21:15:45.256 [INFO][5244] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="cccb7205aa79c7ea0deaae1d2a32d8972a8301305da0eddd0cdf6e347de11ff8" Jan 13 21:15:45.260226 containerd[1434]: time="2025-01-13T21:15:45.259753532Z" level=info msg="TearDown network for sandbox \"cccb7205aa79c7ea0deaae1d2a32d8972a8301305da0eddd0cdf6e347de11ff8\" successfully" Jan 13 21:15:45.260226 containerd[1434]: time="2025-01-13T21:15:45.259784814Z" level=info msg="StopPodSandbox for \"cccb7205aa79c7ea0deaae1d2a32d8972a8301305da0eddd0cdf6e347de11ff8\" returns successfully" Jan 13 21:15:45.261291 containerd[1434]: time="2025-01-13T21:15:45.260363720Z" level=info msg="RemovePodSandbox for \"cccb7205aa79c7ea0deaae1d2a32d8972a8301305da0eddd0cdf6e347de11ff8\"" Jan 13 21:15:45.261291 containerd[1434]: time="2025-01-13T21:15:45.260394122Z" level=info msg="Forcibly stopping sandbox \"cccb7205aa79c7ea0deaae1d2a32d8972a8301305da0eddd0cdf6e347de11ff8\"" Jan 13 21:15:45.334197 containerd[1434]: 2025-01-13 21:15:45.296 [WARNING][5274] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="cccb7205aa79c7ea0deaae1d2a32d8972a8301305da0eddd0cdf6e347de11ff8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--st9qh-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"67345a72-9d66-4d9b-8d45-698aed92c23c", ResourceVersion:"994", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 14, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0971861583694845e19750b424e2582906449996d89f2ba729c56c46b83c2993", Pod:"csi-node-driver-st9qh", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calib345dfa7a21", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:15:45.334197 containerd[1434]: 2025-01-13 21:15:45.296 [INFO][5274] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="cccb7205aa79c7ea0deaae1d2a32d8972a8301305da0eddd0cdf6e347de11ff8" Jan 13 21:15:45.334197 containerd[1434]: 2025-01-13 21:15:45.296 [INFO][5274] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="cccb7205aa79c7ea0deaae1d2a32d8972a8301305da0eddd0cdf6e347de11ff8" iface="eth0" netns="" Jan 13 21:15:45.334197 containerd[1434]: 2025-01-13 21:15:45.296 [INFO][5274] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="cccb7205aa79c7ea0deaae1d2a32d8972a8301305da0eddd0cdf6e347de11ff8" Jan 13 21:15:45.334197 containerd[1434]: 2025-01-13 21:15:45.297 [INFO][5274] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cccb7205aa79c7ea0deaae1d2a32d8972a8301305da0eddd0cdf6e347de11ff8" Jan 13 21:15:45.334197 containerd[1434]: 2025-01-13 21:15:45.321 [INFO][5281] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cccb7205aa79c7ea0deaae1d2a32d8972a8301305da0eddd0cdf6e347de11ff8" HandleID="k8s-pod-network.cccb7205aa79c7ea0deaae1d2a32d8972a8301305da0eddd0cdf6e347de11ff8" Workload="localhost-k8s-csi--node--driver--st9qh-eth0" Jan 13 21:15:45.334197 containerd[1434]: 2025-01-13 21:15:45.321 [INFO][5281] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:15:45.334197 containerd[1434]: 2025-01-13 21:15:45.321 [INFO][5281] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:15:45.334197 containerd[1434]: 2025-01-13 21:15:45.329 [WARNING][5281] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="cccb7205aa79c7ea0deaae1d2a32d8972a8301305da0eddd0cdf6e347de11ff8" HandleID="k8s-pod-network.cccb7205aa79c7ea0deaae1d2a32d8972a8301305da0eddd0cdf6e347de11ff8" Workload="localhost-k8s-csi--node--driver--st9qh-eth0" Jan 13 21:15:45.334197 containerd[1434]: 2025-01-13 21:15:45.329 [INFO][5281] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cccb7205aa79c7ea0deaae1d2a32d8972a8301305da0eddd0cdf6e347de11ff8" HandleID="k8s-pod-network.cccb7205aa79c7ea0deaae1d2a32d8972a8301305da0eddd0cdf6e347de11ff8" Workload="localhost-k8s-csi--node--driver--st9qh-eth0" Jan 13 21:15:45.334197 containerd[1434]: 2025-01-13 21:15:45.331 [INFO][5281] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:15:45.334197 containerd[1434]: 2025-01-13 21:15:45.332 [INFO][5274] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="cccb7205aa79c7ea0deaae1d2a32d8972a8301305da0eddd0cdf6e347de11ff8" Jan 13 21:15:45.334781 containerd[1434]: time="2025-01-13T21:15:45.334750838Z" level=info msg="TearDown network for sandbox \"cccb7205aa79c7ea0deaae1d2a32d8972a8301305da0eddd0cdf6e347de11ff8\" successfully" Jan 13 21:15:45.337816 containerd[1434]: time="2025-01-13T21:15:45.337781497Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"cccb7205aa79c7ea0deaae1d2a32d8972a8301305da0eddd0cdf6e347de11ff8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 13 21:15:45.337983 containerd[1434]: time="2025-01-13T21:15:45.337963225Z" level=info msg="RemovePodSandbox \"cccb7205aa79c7ea0deaae1d2a32d8972a8301305da0eddd0cdf6e347de11ff8\" returns successfully" Jan 13 21:15:45.338516 containerd[1434]: time="2025-01-13T21:15:45.338490009Z" level=info msg="StopPodSandbox for \"49ceb3ec3f78ca93d3db302af1463b4bfd9fc3863f2aaed382d90d4f7b446ad1\"" Jan 13 21:15:45.416439 containerd[1434]: 2025-01-13 21:15:45.378 [WARNING][5303] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="49ceb3ec3f78ca93d3db302af1463b4bfd9fc3863f2aaed382d90d4f7b446ad1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--pcklw-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"6546b16e-ed32-4e3e-8156-6c685dd971ab", ResourceVersion:"880", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 14, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"cd595a8b4eca0eb8ce0e8deeaec12857c2d4d820dc0b55a4ed2e8b7fbdb33b25", Pod:"coredns-6f6b679f8f-pcklw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali22cd610d14b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:15:45.416439 containerd[1434]: 2025-01-13 21:15:45.378 [INFO][5303] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="49ceb3ec3f78ca93d3db302af1463b4bfd9fc3863f2aaed382d90d4f7b446ad1" Jan 13 21:15:45.416439 containerd[1434]: 2025-01-13 21:15:45.378 [INFO][5303] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="49ceb3ec3f78ca93d3db302af1463b4bfd9fc3863f2aaed382d90d4f7b446ad1" iface="eth0" netns="" Jan 13 21:15:45.416439 containerd[1434]: 2025-01-13 21:15:45.378 [INFO][5303] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="49ceb3ec3f78ca93d3db302af1463b4bfd9fc3863f2aaed382d90d4f7b446ad1" Jan 13 21:15:45.416439 containerd[1434]: 2025-01-13 21:15:45.378 [INFO][5303] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="49ceb3ec3f78ca93d3db302af1463b4bfd9fc3863f2aaed382d90d4f7b446ad1" Jan 13 21:15:45.416439 containerd[1434]: 2025-01-13 21:15:45.403 [INFO][5310] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="49ceb3ec3f78ca93d3db302af1463b4bfd9fc3863f2aaed382d90d4f7b446ad1" HandleID="k8s-pod-network.49ceb3ec3f78ca93d3db302af1463b4bfd9fc3863f2aaed382d90d4f7b446ad1" Workload="localhost-k8s-coredns--6f6b679f8f--pcklw-eth0" Jan 13 21:15:45.416439 containerd[1434]: 2025-01-13 21:15:45.403 [INFO][5310] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:15:45.416439 containerd[1434]: 2025-01-13 21:15:45.403 [INFO][5310] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 13 21:15:45.416439 containerd[1434]: 2025-01-13 21:15:45.412 [WARNING][5310] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="49ceb3ec3f78ca93d3db302af1463b4bfd9fc3863f2aaed382d90d4f7b446ad1" HandleID="k8s-pod-network.49ceb3ec3f78ca93d3db302af1463b4bfd9fc3863f2aaed382d90d4f7b446ad1" Workload="localhost-k8s-coredns--6f6b679f8f--pcklw-eth0" Jan 13 21:15:45.416439 containerd[1434]: 2025-01-13 21:15:45.412 [INFO][5310] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="49ceb3ec3f78ca93d3db302af1463b4bfd9fc3863f2aaed382d90d4f7b446ad1" HandleID="k8s-pod-network.49ceb3ec3f78ca93d3db302af1463b4bfd9fc3863f2aaed382d90d4f7b446ad1" Workload="localhost-k8s-coredns--6f6b679f8f--pcklw-eth0" Jan 13 21:15:45.416439 containerd[1434]: 2025-01-13 21:15:45.413 [INFO][5310] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:15:45.416439 containerd[1434]: 2025-01-13 21:15:45.415 [INFO][5303] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="49ceb3ec3f78ca93d3db302af1463b4bfd9fc3863f2aaed382d90d4f7b446ad1" Jan 13 21:15:45.416984 containerd[1434]: time="2025-01-13T21:15:45.416460411Z" level=info msg="TearDown network for sandbox \"49ceb3ec3f78ca93d3db302af1463b4bfd9fc3863f2aaed382d90d4f7b446ad1\" successfully" Jan 13 21:15:45.416984 containerd[1434]: time="2025-01-13T21:15:45.416485892Z" level=info msg="StopPodSandbox for \"49ceb3ec3f78ca93d3db302af1463b4bfd9fc3863f2aaed382d90d4f7b446ad1\" returns successfully" Jan 13 21:15:45.417498 containerd[1434]: time="2025-01-13T21:15:45.417266128Z" level=info msg="RemovePodSandbox for \"49ceb3ec3f78ca93d3db302af1463b4bfd9fc3863f2aaed382d90d4f7b446ad1\"" Jan 13 21:15:45.417498 containerd[1434]: time="2025-01-13T21:15:45.417309450Z" level=info msg="Forcibly stopping sandbox \"49ceb3ec3f78ca93d3db302af1463b4bfd9fc3863f2aaed382d90d4f7b446ad1\"" Jan 13 21:15:45.483780 containerd[1434]: 2025-01-13 21:15:45.450 [WARNING][5333] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="49ceb3ec3f78ca93d3db302af1463b4bfd9fc3863f2aaed382d90d4f7b446ad1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--pcklw-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"6546b16e-ed32-4e3e-8156-6c685dd971ab", ResourceVersion:"880", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 14, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"cd595a8b4eca0eb8ce0e8deeaec12857c2d4d820dc0b55a4ed2e8b7fbdb33b25", Pod:"coredns-6f6b679f8f-pcklw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali22cd610d14b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:15:45.483780 containerd[1434]: 2025-01-13 21:15:45.450 [INFO][5333] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="49ceb3ec3f78ca93d3db302af1463b4bfd9fc3863f2aaed382d90d4f7b446ad1" Jan 13 21:15:45.483780 containerd[1434]: 2025-01-13 21:15:45.450 [INFO][5333] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="49ceb3ec3f78ca93d3db302af1463b4bfd9fc3863f2aaed382d90d4f7b446ad1" iface="eth0" netns="" Jan 13 21:15:45.483780 containerd[1434]: 2025-01-13 21:15:45.450 [INFO][5333] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="49ceb3ec3f78ca93d3db302af1463b4bfd9fc3863f2aaed382d90d4f7b446ad1" Jan 13 21:15:45.483780 containerd[1434]: 2025-01-13 21:15:45.450 [INFO][5333] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="49ceb3ec3f78ca93d3db302af1463b4bfd9fc3863f2aaed382d90d4f7b446ad1" Jan 13 21:15:45.483780 containerd[1434]: 2025-01-13 21:15:45.470 [INFO][5340] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="49ceb3ec3f78ca93d3db302af1463b4bfd9fc3863f2aaed382d90d4f7b446ad1" HandleID="k8s-pod-network.49ceb3ec3f78ca93d3db302af1463b4bfd9fc3863f2aaed382d90d4f7b446ad1" Workload="localhost-k8s-coredns--6f6b679f8f--pcklw-eth0" Jan 13 21:15:45.483780 containerd[1434]: 2025-01-13 21:15:45.470 [INFO][5340] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:15:45.483780 containerd[1434]: 2025-01-13 21:15:45.470 [INFO][5340] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 13 21:15:45.483780 containerd[1434]: 2025-01-13 21:15:45.478 [WARNING][5340] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="49ceb3ec3f78ca93d3db302af1463b4bfd9fc3863f2aaed382d90d4f7b446ad1" HandleID="k8s-pod-network.49ceb3ec3f78ca93d3db302af1463b4bfd9fc3863f2aaed382d90d4f7b446ad1" Workload="localhost-k8s-coredns--6f6b679f8f--pcklw-eth0" Jan 13 21:15:45.483780 containerd[1434]: 2025-01-13 21:15:45.478 [INFO][5340] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="49ceb3ec3f78ca93d3db302af1463b4bfd9fc3863f2aaed382d90d4f7b446ad1" HandleID="k8s-pod-network.49ceb3ec3f78ca93d3db302af1463b4bfd9fc3863f2aaed382d90d4f7b446ad1" Workload="localhost-k8s-coredns--6f6b679f8f--pcklw-eth0" Jan 13 21:15:45.483780 containerd[1434]: 2025-01-13 21:15:45.479 [INFO][5340] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:15:45.483780 containerd[1434]: 2025-01-13 21:15:45.481 [INFO][5333] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="49ceb3ec3f78ca93d3db302af1463b4bfd9fc3863f2aaed382d90d4f7b446ad1" Jan 13 21:15:45.483780 containerd[1434]: time="2025-01-13T21:15:45.482887845Z" level=info msg="TearDown network for sandbox \"49ceb3ec3f78ca93d3db302af1463b4bfd9fc3863f2aaed382d90d4f7b446ad1\" successfully" Jan 13 21:15:45.487676 containerd[1434]: time="2025-01-13T21:15:45.487624182Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"49ceb3ec3f78ca93d3db302af1463b4bfd9fc3863f2aaed382d90d4f7b446ad1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 13 21:15:45.493705 containerd[1434]: time="2025-01-13T21:15:45.493645257Z" level=info msg="RemovePodSandbox \"49ceb3ec3f78ca93d3db302af1463b4bfd9fc3863f2aaed382d90d4f7b446ad1\" returns successfully" Jan 13 21:15:45.494198 containerd[1434]: time="2025-01-13T21:15:45.494167321Z" level=info msg="StopPodSandbox for \"57ecb9b8ccdf0ab6b04474895c30e66b2c063daae7f98e21907bf3549d822357\"" Jan 13 21:15:45.559562 containerd[1434]: 2025-01-13 21:15:45.525 [WARNING][5362] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="57ecb9b8ccdf0ab6b04474895c30e66b2c063daae7f98e21907bf3549d822357" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--t7c2j-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"6deed8be-443c-4c20-8288-97ef2040b5e5", ResourceVersion:"922", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 14, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2e4bfa78bbee4ab77b277bd409aaf7206bb5c7a08ba71784f71e8c308012018a", Pod:"coredns-6f6b679f8f-t7c2j", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali68a008584c4", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:15:45.559562 containerd[1434]: 2025-01-13 21:15:45.525 [INFO][5362] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="57ecb9b8ccdf0ab6b04474895c30e66b2c063daae7f98e21907bf3549d822357" Jan 13 21:15:45.559562 containerd[1434]: 2025-01-13 21:15:45.525 [INFO][5362] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="57ecb9b8ccdf0ab6b04474895c30e66b2c063daae7f98e21907bf3549d822357" iface="eth0" netns="" Jan 13 21:15:45.559562 containerd[1434]: 2025-01-13 21:15:45.525 [INFO][5362] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="57ecb9b8ccdf0ab6b04474895c30e66b2c063daae7f98e21907bf3549d822357" Jan 13 21:15:45.559562 containerd[1434]: 2025-01-13 21:15:45.525 [INFO][5362] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="57ecb9b8ccdf0ab6b04474895c30e66b2c063daae7f98e21907bf3549d822357" Jan 13 21:15:45.559562 containerd[1434]: 2025-01-13 21:15:45.543 [INFO][5369] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="57ecb9b8ccdf0ab6b04474895c30e66b2c063daae7f98e21907bf3549d822357" HandleID="k8s-pod-network.57ecb9b8ccdf0ab6b04474895c30e66b2c063daae7f98e21907bf3549d822357" Workload="localhost-k8s-coredns--6f6b679f8f--t7c2j-eth0" Jan 13 21:15:45.559562 containerd[1434]: 2025-01-13 21:15:45.543 [INFO][5369] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:15:45.559562 containerd[1434]: 2025-01-13 21:15:45.543 [INFO][5369] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 13 21:15:45.559562 containerd[1434]: 2025-01-13 21:15:45.555 [WARNING][5369] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="57ecb9b8ccdf0ab6b04474895c30e66b2c063daae7f98e21907bf3549d822357" HandleID="k8s-pod-network.57ecb9b8ccdf0ab6b04474895c30e66b2c063daae7f98e21907bf3549d822357" Workload="localhost-k8s-coredns--6f6b679f8f--t7c2j-eth0" Jan 13 21:15:45.559562 containerd[1434]: 2025-01-13 21:15:45.555 [INFO][5369] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="57ecb9b8ccdf0ab6b04474895c30e66b2c063daae7f98e21907bf3549d822357" HandleID="k8s-pod-network.57ecb9b8ccdf0ab6b04474895c30e66b2c063daae7f98e21907bf3549d822357" Workload="localhost-k8s-coredns--6f6b679f8f--t7c2j-eth0" Jan 13 21:15:45.559562 containerd[1434]: 2025-01-13 21:15:45.556 [INFO][5369] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:15:45.559562 containerd[1434]: 2025-01-13 21:15:45.558 [INFO][5362] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="57ecb9b8ccdf0ab6b04474895c30e66b2c063daae7f98e21907bf3549d822357" Jan 13 21:15:45.559960 containerd[1434]: time="2025-01-13T21:15:45.559626511Z" level=info msg="TearDown network for sandbox \"57ecb9b8ccdf0ab6b04474895c30e66b2c063daae7f98e21907bf3549d822357\" successfully" Jan 13 21:15:45.559960 containerd[1434]: time="2025-01-13T21:15:45.559664393Z" level=info msg="StopPodSandbox for \"57ecb9b8ccdf0ab6b04474895c30e66b2c063daae7f98e21907bf3549d822357\" returns successfully" Jan 13 21:15:45.560204 containerd[1434]: time="2025-01-13T21:15:45.560151695Z" level=info msg="RemovePodSandbox for \"57ecb9b8ccdf0ab6b04474895c30e66b2c063daae7f98e21907bf3549d822357\"" Jan 13 21:15:45.560238 containerd[1434]: time="2025-01-13T21:15:45.560203017Z" level=info msg="Forcibly stopping sandbox \"57ecb9b8ccdf0ab6b04474895c30e66b2c063daae7f98e21907bf3549d822357\"" Jan 13 21:15:45.626744 containerd[1434]: 2025-01-13 21:15:45.595 [WARNING][5392] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="57ecb9b8ccdf0ab6b04474895c30e66b2c063daae7f98e21907bf3549d822357" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--t7c2j-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"6deed8be-443c-4c20-8288-97ef2040b5e5", ResourceVersion:"922", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 14, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2e4bfa78bbee4ab77b277bd409aaf7206bb5c7a08ba71784f71e8c308012018a", Pod:"coredns-6f6b679f8f-t7c2j", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali68a008584c4", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:15:45.626744 containerd[1434]: 2025-01-13 21:15:45.595 [INFO][5392] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="57ecb9b8ccdf0ab6b04474895c30e66b2c063daae7f98e21907bf3549d822357" Jan 13 21:15:45.626744 containerd[1434]: 2025-01-13 21:15:45.595 [INFO][5392] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="57ecb9b8ccdf0ab6b04474895c30e66b2c063daae7f98e21907bf3549d822357" iface="eth0" netns="" Jan 13 21:15:45.626744 containerd[1434]: 2025-01-13 21:15:45.595 [INFO][5392] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="57ecb9b8ccdf0ab6b04474895c30e66b2c063daae7f98e21907bf3549d822357" Jan 13 21:15:45.626744 containerd[1434]: 2025-01-13 21:15:45.595 [INFO][5392] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="57ecb9b8ccdf0ab6b04474895c30e66b2c063daae7f98e21907bf3549d822357" Jan 13 21:15:45.626744 containerd[1434]: 2025-01-13 21:15:45.613 [INFO][5399] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="57ecb9b8ccdf0ab6b04474895c30e66b2c063daae7f98e21907bf3549d822357" HandleID="k8s-pod-network.57ecb9b8ccdf0ab6b04474895c30e66b2c063daae7f98e21907bf3549d822357" Workload="localhost-k8s-coredns--6f6b679f8f--t7c2j-eth0" Jan 13 21:15:45.626744 containerd[1434]: 2025-01-13 21:15:45.613 [INFO][5399] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:15:45.626744 containerd[1434]: 2025-01-13 21:15:45.613 [INFO][5399] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 13 21:15:45.626744 containerd[1434]: 2025-01-13 21:15:45.622 [WARNING][5399] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="57ecb9b8ccdf0ab6b04474895c30e66b2c063daae7f98e21907bf3549d822357" HandleID="k8s-pod-network.57ecb9b8ccdf0ab6b04474895c30e66b2c063daae7f98e21907bf3549d822357" Workload="localhost-k8s-coredns--6f6b679f8f--t7c2j-eth0" Jan 13 21:15:45.626744 containerd[1434]: 2025-01-13 21:15:45.622 [INFO][5399] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="57ecb9b8ccdf0ab6b04474895c30e66b2c063daae7f98e21907bf3549d822357" HandleID="k8s-pod-network.57ecb9b8ccdf0ab6b04474895c30e66b2c063daae7f98e21907bf3549d822357" Workload="localhost-k8s-coredns--6f6b679f8f--t7c2j-eth0" Jan 13 21:15:45.626744 containerd[1434]: 2025-01-13 21:15:45.623 [INFO][5399] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:15:45.626744 containerd[1434]: 2025-01-13 21:15:45.625 [INFO][5392] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="57ecb9b8ccdf0ab6b04474895c30e66b2c063daae7f98e21907bf3549d822357" Jan 13 21:15:45.627286 containerd[1434]: time="2025-01-13T21:15:45.626790179Z" level=info msg="TearDown network for sandbox \"57ecb9b8ccdf0ab6b04474895c30e66b2c063daae7f98e21907bf3549d822357\" successfully" Jan 13 21:15:45.629451 containerd[1434]: time="2025-01-13T21:15:45.629390818Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"57ecb9b8ccdf0ab6b04474895c30e66b2c063daae7f98e21907bf3549d822357\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 13 21:15:45.629550 containerd[1434]: time="2025-01-13T21:15:45.629524144Z" level=info msg="RemovePodSandbox \"57ecb9b8ccdf0ab6b04474895c30e66b2c063daae7f98e21907bf3549d822357\" returns successfully" Jan 13 21:15:45.630316 containerd[1434]: time="2025-01-13T21:15:45.630041768Z" level=info msg="StopPodSandbox for \"3fc229edecaa0e6fb512b5968a69dbac36c89051849e486433976bef55302188\"" Jan 13 21:15:45.694552 containerd[1434]: 2025-01-13 21:15:45.663 [WARNING][5422] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3fc229edecaa0e6fb512b5968a69dbac36c89051849e486433976bef55302188" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--79d978d948--2zkm8-eth0", GenerateName:"calico-apiserver-79d978d948-", Namespace:"calico-apiserver", SelfLink:"", UID:"48748cb0-5484-4170-8079-80e03c4d2ae3", ResourceVersion:"955", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 14, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"79d978d948", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f4b751fd01c3faaadefc82799ccf1e5d788c13f58fdae0ddaea5f9ed4d3b976f", Pod:"calico-apiserver-79d978d948-2zkm8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid2eebf9800c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:15:45.694552 containerd[1434]: 2025-01-13 21:15:45.663 [INFO][5422] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="3fc229edecaa0e6fb512b5968a69dbac36c89051849e486433976bef55302188" Jan 13 21:15:45.694552 containerd[1434]: 2025-01-13 21:15:45.663 [INFO][5422] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3fc229edecaa0e6fb512b5968a69dbac36c89051849e486433976bef55302188" iface="eth0" netns="" Jan 13 21:15:45.694552 containerd[1434]: 2025-01-13 21:15:45.663 [INFO][5422] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="3fc229edecaa0e6fb512b5968a69dbac36c89051849e486433976bef55302188" Jan 13 21:15:45.694552 containerd[1434]: 2025-01-13 21:15:45.663 [INFO][5422] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3fc229edecaa0e6fb512b5968a69dbac36c89051849e486433976bef55302188" Jan 13 21:15:45.694552 containerd[1434]: 2025-01-13 21:15:45.681 [INFO][5430] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3fc229edecaa0e6fb512b5968a69dbac36c89051849e486433976bef55302188" HandleID="k8s-pod-network.3fc229edecaa0e6fb512b5968a69dbac36c89051849e486433976bef55302188" Workload="localhost-k8s-calico--apiserver--79d978d948--2zkm8-eth0" Jan 13 21:15:45.694552 containerd[1434]: 2025-01-13 21:15:45.681 [INFO][5430] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:15:45.694552 containerd[1434]: 2025-01-13 21:15:45.681 [INFO][5430] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:15:45.694552 containerd[1434]: 2025-01-13 21:15:45.689 [WARNING][5430] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3fc229edecaa0e6fb512b5968a69dbac36c89051849e486433976bef55302188" HandleID="k8s-pod-network.3fc229edecaa0e6fb512b5968a69dbac36c89051849e486433976bef55302188" Workload="localhost-k8s-calico--apiserver--79d978d948--2zkm8-eth0" Jan 13 21:15:45.694552 containerd[1434]: 2025-01-13 21:15:45.690 [INFO][5430] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3fc229edecaa0e6fb512b5968a69dbac36c89051849e486433976bef55302188" HandleID="k8s-pod-network.3fc229edecaa0e6fb512b5968a69dbac36c89051849e486433976bef55302188" Workload="localhost-k8s-calico--apiserver--79d978d948--2zkm8-eth0" Jan 13 21:15:45.694552 containerd[1434]: 2025-01-13 21:15:45.691 [INFO][5430] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:15:45.694552 containerd[1434]: 2025-01-13 21:15:45.692 [INFO][5422] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="3fc229edecaa0e6fb512b5968a69dbac36c89051849e486433976bef55302188" Jan 13 21:15:45.694552 containerd[1434]: time="2025-01-13T21:15:45.694433989Z" level=info msg="TearDown network for sandbox \"3fc229edecaa0e6fb512b5968a69dbac36c89051849e486433976bef55302188\" successfully" Jan 13 21:15:45.694552 containerd[1434]: time="2025-01-13T21:15:45.694457030Z" level=info msg="StopPodSandbox for \"3fc229edecaa0e6fb512b5968a69dbac36c89051849e486433976bef55302188\" returns successfully" Jan 13 21:15:45.695031 containerd[1434]: time="2025-01-13T21:15:45.694932612Z" level=info msg="RemovePodSandbox for \"3fc229edecaa0e6fb512b5968a69dbac36c89051849e486433976bef55302188\"" Jan 13 21:15:45.695031 containerd[1434]: time="2025-01-13T21:15:45.694961853Z" level=info msg="Forcibly stopping sandbox \"3fc229edecaa0e6fb512b5968a69dbac36c89051849e486433976bef55302188\"" Jan 13 21:15:45.759336 containerd[1434]: 2025-01-13 21:15:45.729 [WARNING][5453] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3fc229edecaa0e6fb512b5968a69dbac36c89051849e486433976bef55302188" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--79d978d948--2zkm8-eth0", GenerateName:"calico-apiserver-79d978d948-", Namespace:"calico-apiserver", SelfLink:"", UID:"48748cb0-5484-4170-8079-80e03c4d2ae3", ResourceVersion:"955", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 14, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"79d978d948", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f4b751fd01c3faaadefc82799ccf1e5d788c13f58fdae0ddaea5f9ed4d3b976f", Pod:"calico-apiserver-79d978d948-2zkm8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid2eebf9800c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:15:45.759336 containerd[1434]: 2025-01-13 21:15:45.729 [INFO][5453] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="3fc229edecaa0e6fb512b5968a69dbac36c89051849e486433976bef55302188" Jan 13 21:15:45.759336 containerd[1434]: 2025-01-13 21:15:45.729 [INFO][5453] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3fc229edecaa0e6fb512b5968a69dbac36c89051849e486433976bef55302188" iface="eth0" netns="" Jan 13 21:15:45.759336 containerd[1434]: 2025-01-13 21:15:45.729 [INFO][5453] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="3fc229edecaa0e6fb512b5968a69dbac36c89051849e486433976bef55302188" Jan 13 21:15:45.759336 containerd[1434]: 2025-01-13 21:15:45.729 [INFO][5453] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3fc229edecaa0e6fb512b5968a69dbac36c89051849e486433976bef55302188" Jan 13 21:15:45.759336 containerd[1434]: 2025-01-13 21:15:45.746 [INFO][5460] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3fc229edecaa0e6fb512b5968a69dbac36c89051849e486433976bef55302188" HandleID="k8s-pod-network.3fc229edecaa0e6fb512b5968a69dbac36c89051849e486433976bef55302188" Workload="localhost-k8s-calico--apiserver--79d978d948--2zkm8-eth0" Jan 13 21:15:45.759336 containerd[1434]: 2025-01-13 21:15:45.746 [INFO][5460] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:15:45.759336 containerd[1434]: 2025-01-13 21:15:45.746 [INFO][5460] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:15:45.759336 containerd[1434]: 2025-01-13 21:15:45.754 [WARNING][5460] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3fc229edecaa0e6fb512b5968a69dbac36c89051849e486433976bef55302188" HandleID="k8s-pod-network.3fc229edecaa0e6fb512b5968a69dbac36c89051849e486433976bef55302188" Workload="localhost-k8s-calico--apiserver--79d978d948--2zkm8-eth0" Jan 13 21:15:45.759336 containerd[1434]: 2025-01-13 21:15:45.754 [INFO][5460] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3fc229edecaa0e6fb512b5968a69dbac36c89051849e486433976bef55302188" HandleID="k8s-pod-network.3fc229edecaa0e6fb512b5968a69dbac36c89051849e486433976bef55302188" Workload="localhost-k8s-calico--apiserver--79d978d948--2zkm8-eth0" Jan 13 21:15:45.759336 containerd[1434]: 2025-01-13 21:15:45.756 [INFO][5460] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:15:45.759336 containerd[1434]: 2025-01-13 21:15:45.757 [INFO][5453] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="3fc229edecaa0e6fb512b5968a69dbac36c89051849e486433976bef55302188" Jan 13 21:15:45.759336 containerd[1434]: time="2025-01-13T21:15:45.759305113Z" level=info msg="TearDown network for sandbox \"3fc229edecaa0e6fb512b5968a69dbac36c89051849e486433976bef55302188\" successfully" Jan 13 21:15:45.769594 containerd[1434]: time="2025-01-13T21:15:45.769546580Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3fc229edecaa0e6fb512b5968a69dbac36c89051849e486433976bef55302188\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 13 21:15:45.769677 containerd[1434]: time="2025-01-13T21:15:45.769619144Z" level=info msg="RemovePodSandbox \"3fc229edecaa0e6fb512b5968a69dbac36c89051849e486433976bef55302188\" returns successfully" Jan 13 21:15:45.770159 containerd[1434]: time="2025-01-13T21:15:45.770124967Z" level=info msg="StopPodSandbox for \"5aecf0c1274be05dd42d4534f4059ab89d0b47f22e24fba1edca2b3138e3c9de\"" Jan 13 21:15:45.835154 containerd[1434]: 2025-01-13 21:15:45.802 [WARNING][5482] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5aecf0c1274be05dd42d4534f4059ab89d0b47f22e24fba1edca2b3138e3c9de" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--79d978d948--phbxp-eth0", GenerateName:"calico-apiserver-79d978d948-", Namespace:"calico-apiserver", SelfLink:"", UID:"eba042e6-b417-4f58-b615-4e861e1468fa", ResourceVersion:"1088", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 14, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"79d978d948", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8c65932b3cf2063219d52aae862cb81349ecf476952b527c0322649507ab3937", Pod:"calico-apiserver-79d978d948-phbxp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8bc608c2867", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:15:45.835154 containerd[1434]: 2025-01-13 21:15:45.803 [INFO][5482] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="5aecf0c1274be05dd42d4534f4059ab89d0b47f22e24fba1edca2b3138e3c9de" Jan 13 21:15:45.835154 containerd[1434]: 2025-01-13 21:15:45.803 [INFO][5482] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5aecf0c1274be05dd42d4534f4059ab89d0b47f22e24fba1edca2b3138e3c9de" iface="eth0" netns="" Jan 13 21:15:45.835154 containerd[1434]: 2025-01-13 21:15:45.803 [INFO][5482] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="5aecf0c1274be05dd42d4534f4059ab89d0b47f22e24fba1edca2b3138e3c9de" Jan 13 21:15:45.835154 containerd[1434]: 2025-01-13 21:15:45.803 [INFO][5482] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5aecf0c1274be05dd42d4534f4059ab89d0b47f22e24fba1edca2b3138e3c9de" Jan 13 21:15:45.835154 containerd[1434]: 2025-01-13 21:15:45.821 [INFO][5490] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5aecf0c1274be05dd42d4534f4059ab89d0b47f22e24fba1edca2b3138e3c9de" HandleID="k8s-pod-network.5aecf0c1274be05dd42d4534f4059ab89d0b47f22e24fba1edca2b3138e3c9de" Workload="localhost-k8s-calico--apiserver--79d978d948--phbxp-eth0" Jan 13 21:15:45.835154 containerd[1434]: 2025-01-13 21:15:45.821 [INFO][5490] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:15:45.835154 containerd[1434]: 2025-01-13 21:15:45.821 [INFO][5490] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:15:45.835154 containerd[1434]: 2025-01-13 21:15:45.829 [WARNING][5490] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5aecf0c1274be05dd42d4534f4059ab89d0b47f22e24fba1edca2b3138e3c9de" HandleID="k8s-pod-network.5aecf0c1274be05dd42d4534f4059ab89d0b47f22e24fba1edca2b3138e3c9de" Workload="localhost-k8s-calico--apiserver--79d978d948--phbxp-eth0" Jan 13 21:15:45.835154 containerd[1434]: 2025-01-13 21:15:45.829 [INFO][5490] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5aecf0c1274be05dd42d4534f4059ab89d0b47f22e24fba1edca2b3138e3c9de" HandleID="k8s-pod-network.5aecf0c1274be05dd42d4534f4059ab89d0b47f22e24fba1edca2b3138e3c9de" Workload="localhost-k8s-calico--apiserver--79d978d948--phbxp-eth0" Jan 13 21:15:45.835154 containerd[1434]: 2025-01-13 21:15:45.832 [INFO][5490] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:15:45.835154 containerd[1434]: 2025-01-13 21:15:45.833 [INFO][5482] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="5aecf0c1274be05dd42d4534f4059ab89d0b47f22e24fba1edca2b3138e3c9de" Jan 13 21:15:45.835154 containerd[1434]: time="2025-01-13T21:15:45.835036612Z" level=info msg="TearDown network for sandbox \"5aecf0c1274be05dd42d4534f4059ab89d0b47f22e24fba1edca2b3138e3c9de\" successfully" Jan 13 21:15:45.835154 containerd[1434]: time="2025-01-13T21:15:45.835061253Z" level=info msg="StopPodSandbox for \"5aecf0c1274be05dd42d4534f4059ab89d0b47f22e24fba1edca2b3138e3c9de\" returns successfully" Jan 13 21:15:45.836345 containerd[1434]: time="2025-01-13T21:15:45.836020417Z" level=info msg="RemovePodSandbox for \"5aecf0c1274be05dd42d4534f4059ab89d0b47f22e24fba1edca2b3138e3c9de\"" Jan 13 21:15:45.836345 containerd[1434]: time="2025-01-13T21:15:45.836083980Z" level=info msg="Forcibly stopping sandbox \"5aecf0c1274be05dd42d4534f4059ab89d0b47f22e24fba1edca2b3138e3c9de\"" Jan 13 21:15:45.873084 systemd[1]: Started sshd@18-10.0.0.48:22-10.0.0.1:57178.service - OpenSSH per-connection server daemon (10.0.0.1:57178). Jan 13 21:15:45.913741 containerd[1434]: 2025-01-13 21:15:45.874 [WARNING][5514] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5aecf0c1274be05dd42d4534f4059ab89d0b47f22e24fba1edca2b3138e3c9de" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--79d978d948--phbxp-eth0", GenerateName:"calico-apiserver-79d978d948-", Namespace:"calico-apiserver", SelfLink:"", UID:"eba042e6-b417-4f58-b615-4e861e1468fa", ResourceVersion:"1088", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 14, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"79d978d948", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8c65932b3cf2063219d52aae862cb81349ecf476952b527c0322649507ab3937", Pod:"calico-apiserver-79d978d948-phbxp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8bc608c2867", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:15:45.913741 containerd[1434]: 2025-01-13 21:15:45.877 [INFO][5514] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="5aecf0c1274be05dd42d4534f4059ab89d0b47f22e24fba1edca2b3138e3c9de" Jan 13 21:15:45.913741 containerd[1434]: 2025-01-13 21:15:45.877 [INFO][5514] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5aecf0c1274be05dd42d4534f4059ab89d0b47f22e24fba1edca2b3138e3c9de" iface="eth0" netns="" Jan 13 21:15:45.913741 containerd[1434]: 2025-01-13 21:15:45.877 [INFO][5514] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="5aecf0c1274be05dd42d4534f4059ab89d0b47f22e24fba1edca2b3138e3c9de" Jan 13 21:15:45.913741 containerd[1434]: 2025-01-13 21:15:45.877 [INFO][5514] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5aecf0c1274be05dd42d4534f4059ab89d0b47f22e24fba1edca2b3138e3c9de" Jan 13 21:15:45.913741 containerd[1434]: 2025-01-13 21:15:45.897 [INFO][5523] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5aecf0c1274be05dd42d4534f4059ab89d0b47f22e24fba1edca2b3138e3c9de" HandleID="k8s-pod-network.5aecf0c1274be05dd42d4534f4059ab89d0b47f22e24fba1edca2b3138e3c9de" Workload="localhost-k8s-calico--apiserver--79d978d948--phbxp-eth0" Jan 13 21:15:45.913741 containerd[1434]: 2025-01-13 21:15:45.897 [INFO][5523] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:15:45.913741 containerd[1434]: 2025-01-13 21:15:45.897 [INFO][5523] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:15:45.913741 containerd[1434]: 2025-01-13 21:15:45.909 [WARNING][5523] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5aecf0c1274be05dd42d4534f4059ab89d0b47f22e24fba1edca2b3138e3c9de" HandleID="k8s-pod-network.5aecf0c1274be05dd42d4534f4059ab89d0b47f22e24fba1edca2b3138e3c9de" Workload="localhost-k8s-calico--apiserver--79d978d948--phbxp-eth0" Jan 13 21:15:45.913741 containerd[1434]: 2025-01-13 21:15:45.909 [INFO][5523] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5aecf0c1274be05dd42d4534f4059ab89d0b47f22e24fba1edca2b3138e3c9de" HandleID="k8s-pod-network.5aecf0c1274be05dd42d4534f4059ab89d0b47f22e24fba1edca2b3138e3c9de" Workload="localhost-k8s-calico--apiserver--79d978d948--phbxp-eth0" Jan 13 21:15:45.913741 containerd[1434]: 2025-01-13 21:15:45.910 [INFO][5523] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:15:45.913741 containerd[1434]: 2025-01-13 21:15:45.912 [INFO][5514] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="5aecf0c1274be05dd42d4534f4059ab89d0b47f22e24fba1edca2b3138e3c9de" Jan 13 21:15:45.914322 containerd[1434]: time="2025-01-13T21:15:45.913772769Z" level=info msg="TearDown network for sandbox \"5aecf0c1274be05dd42d4534f4059ab89d0b47f22e24fba1edca2b3138e3c9de\" successfully" Jan 13 21:15:45.914354 sshd[5519]: Accepted publickey for core from 10.0.0.1 port 57178 ssh2: RSA SHA256:yd4gyStb+mhc+KSvOhXa4vXVFZWeTXZvH887VPDApJg Jan 13 21:15:45.917116 sshd[5519]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:15:45.917986 containerd[1434]: time="2025-01-13T21:15:45.917637345Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5aecf0c1274be05dd42d4534f4059ab89d0b47f22e24fba1edca2b3138e3c9de\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 13 21:15:45.917986 containerd[1434]: time="2025-01-13T21:15:45.917773712Z" level=info msg="RemovePodSandbox \"5aecf0c1274be05dd42d4534f4059ab89d0b47f22e24fba1edca2b3138e3c9de\" returns successfully" Jan 13 21:15:45.921478 systemd-logind[1418]: New session 19 of user core. Jan 13 21:15:45.928865 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 13 21:15:46.084828 sshd[5519]: pam_unix(sshd:session): session closed for user core Jan 13 21:15:46.088970 systemd[1]: sshd@18-10.0.0.48:22-10.0.0.1:57178.service: Deactivated successfully. Jan 13 21:15:46.091133 systemd[1]: session-19.scope: Deactivated successfully. Jan 13 21:15:46.092426 systemd-logind[1418]: Session 19 logged out. Waiting for processes to exit. Jan 13 21:15:46.094044 systemd-logind[1418]: Removed session 19. Jan 13 21:15:51.095428 systemd[1]: Started sshd@19-10.0.0.48:22-10.0.0.1:57180.service - OpenSSH per-connection server daemon (10.0.0.1:57180). Jan 13 21:15:51.136476 sshd[5547]: Accepted publickey for core from 10.0.0.1 port 57180 ssh2: RSA SHA256:yd4gyStb+mhc+KSvOhXa4vXVFZWeTXZvH887VPDApJg Jan 13 21:15:51.139041 sshd[5547]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:15:51.142852 systemd-logind[1418]: New session 20 of user core. Jan 13 21:15:51.155908 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 13 21:15:51.283994 sshd[5547]: pam_unix(sshd:session): session closed for user core Jan 13 21:15:51.288294 systemd[1]: sshd@19-10.0.0.48:22-10.0.0.1:57180.service: Deactivated successfully. Jan 13 21:15:51.291443 systemd[1]: session-20.scope: Deactivated successfully. Jan 13 21:15:51.292107 systemd-logind[1418]: Session 20 logged out. Waiting for processes to exit. 
Jan 13 21:15:51.293339 systemd-logind[1418]: Removed session 20.
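
The containerd entries above repeat one pattern several times: a StopPodSandbox / "Forcibly stopping sandbox" request, a Calico CNI teardown in which the plugin declines to delete a WorkloadEndpoint whose ContainerID no longer matches, an IPAM warning that the address is already gone, and finally a successful RemovePodSandbox. The following is a minimal sketch, not part of the original journal, for summarizing those events from a saved excerpt; the file name "journal.txt" and the exact regular expressions are assumptions about how the excerpt was captured, not something the log itself specifies.

#!/usr/bin/env python3
# Minimal sketch (hypothetical, not from the log): count containerd sandbox
# teardown events in a journal excerpt like the one above.
# Assumption: the excerpt is saved as "journal.txt" with containerd's
# level=info msg="..." formatting preserved.
import re
from collections import defaultdict

# The msg fields wrap the 64-hex-character sandbox IDs in backslash-escaped
# quotes, so the backslash is matched optionally in case it was stripped.
STOP_RE    = re.compile(r'StopPodSandbox for \\?"([0-9a-f]{64})\\?"')
REMOVE_RE  = re.compile(r'RemovePodSandbox \\?"([0-9a-f]{64})\\?" returns successfully')
MISSING_RE = re.compile(r"Asked to release address but it doesn't exist")

def summarize(path="journal.txt"):
    sandboxes = defaultdict(lambda: {"stop_requests": 0, "removed": False})
    missing_addr_warnings = 0
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            if MISSING_RE.search(line):
                missing_addr_warnings += 1
            for sid in STOP_RE.findall(line):
                sandboxes[sid]["stop_requests"] += 1
            for sid in REMOVE_RE.findall(line):
                sandboxes[sid]["removed"] = True
    for sid, info in sorted(sandboxes.items()):
        print(f"{sid[:12]}  stop requests: {info['stop_requests']}  removed: {info['removed']}")
    print(f"IPAM 'address does not exist' warnings: {missing_addr_warnings}")

if __name__ == "__main__":
    summarize()

On a live node the same kind of cleanup would typically be inspected or driven manually through crictl (crictl pods, crictl stopp, crictl rmp), but those invocations are not part of this log; the excerpt only shows kubelet-initiated teardown handled by containerd and the Calico CNI plugin.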