Jan 17 12:22:10.886247 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Jan 17 12:22:10.886269 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Fri Jan 17 10:42:25 -00 2025 Jan 17 12:22:10.886279 kernel: KASLR enabled Jan 17 12:22:10.886285 kernel: efi: EFI v2.7 by EDK II Jan 17 12:22:10.886291 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdba86018 ACPI 2.0=0xd9710018 RNG=0xd971e498 MEMRESERVE=0xd9b43d18 Jan 17 12:22:10.886297 kernel: random: crng init done Jan 17 12:22:10.886304 kernel: ACPI: Early table checksum verification disabled Jan 17 12:22:10.886310 kernel: ACPI: RSDP 0x00000000D9710018 000024 (v02 BOCHS ) Jan 17 12:22:10.886317 kernel: ACPI: XSDT 0x00000000D971FE98 000064 (v01 BOCHS BXPC 00000001 01000013) Jan 17 12:22:10.886324 kernel: ACPI: FACP 0x00000000D971FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) Jan 17 12:22:10.886330 kernel: ACPI: DSDT 0x00000000D9717518 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001) Jan 17 12:22:10.886337 kernel: ACPI: APIC 0x00000000D971FC18 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001) Jan 17 12:22:10.886343 kernel: ACPI: PPTT 0x00000000D971D898 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001) Jan 17 12:22:10.886349 kernel: ACPI: GTDT 0x00000000D971E818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Jan 17 12:22:10.886375 kernel: ACPI: MCFG 0x00000000D971E918 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 17 12:22:10.886384 kernel: ACPI: SPCR 0x00000000D971FF98 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) Jan 17 12:22:10.886391 kernel: ACPI: DBG2 0x00000000D971E418 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) Jan 17 12:22:10.886397 kernel: ACPI: IORT 0x00000000D971E718 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jan 17 12:22:10.886404 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600 Jan 17 12:22:10.886417 kernel: NUMA: Failed to initialise from firmware Jan 17 12:22:10.886424 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff] Jan 17 12:22:10.886431 kernel: NUMA: NODE_DATA [mem 0xdc958800-0xdc95dfff] Jan 17 12:22:10.886437 kernel: Zone ranges: Jan 17 12:22:10.886444 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff] Jan 17 12:22:10.886450 kernel: DMA32 empty Jan 17 12:22:10.886458 kernel: Normal empty Jan 17 12:22:10.886465 kernel: Movable zone start for each node Jan 17 12:22:10.886471 kernel: Early memory node ranges Jan 17 12:22:10.886478 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff] Jan 17 12:22:10.886484 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff] Jan 17 12:22:10.886491 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff] Jan 17 12:22:10.886497 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff] Jan 17 12:22:10.886504 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff] Jan 17 12:22:10.886510 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff] Jan 17 12:22:10.886516 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff] Jan 17 12:22:10.886523 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff] Jan 17 12:22:10.886529 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges Jan 17 12:22:10.886538 kernel: psci: probing for conduit method from ACPI. Jan 17 12:22:10.886544 kernel: psci: PSCIv1.1 detected in firmware. 
Jan 17 12:22:10.886551 kernel: psci: Using standard PSCI v0.2 function IDs Jan 17 12:22:10.886560 kernel: psci: Trusted OS migration not required Jan 17 12:22:10.886567 kernel: psci: SMC Calling Convention v1.1 Jan 17 12:22:10.886574 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) Jan 17 12:22:10.886582 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976 Jan 17 12:22:10.886589 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096 Jan 17 12:22:10.886596 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 Jan 17 12:22:10.886603 kernel: Detected PIPT I-cache on CPU0 Jan 17 12:22:10.886610 kernel: CPU features: detected: GIC system register CPU interface Jan 17 12:22:10.886617 kernel: CPU features: detected: Hardware dirty bit management Jan 17 12:22:10.886624 kernel: CPU features: detected: Spectre-v4 Jan 17 12:22:10.886631 kernel: CPU features: detected: Spectre-BHB Jan 17 12:22:10.886638 kernel: CPU features: kernel page table isolation forced ON by KASLR Jan 17 12:22:10.886644 kernel: CPU features: detected: Kernel page table isolation (KPTI) Jan 17 12:22:10.886652 kernel: CPU features: detected: ARM erratum 1418040 Jan 17 12:22:10.886659 kernel: CPU features: detected: SSBS not fully self-synchronizing Jan 17 12:22:10.886666 kernel: alternatives: applying boot alternatives Jan 17 12:22:10.886674 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=1dec90e7382e4708d8bb0385f9465c79a53a2c2baf70ef34aed11855f47d17b3 Jan 17 12:22:10.886681 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jan 17 12:22:10.886688 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 17 12:22:10.886695 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 17 12:22:10.886701 kernel: Fallback order for Node 0: 0 Jan 17 12:22:10.886708 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024 Jan 17 12:22:10.886715 kernel: Policy zone: DMA Jan 17 12:22:10.886721 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 17 12:22:10.886729 kernel: software IO TLB: area num 4. Jan 17 12:22:10.886736 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB) Jan 17 12:22:10.886743 kernel: Memory: 2386532K/2572288K available (10240K kernel code, 2184K rwdata, 8096K rodata, 39360K init, 897K bss, 185756K reserved, 0K cma-reserved) Jan 17 12:22:10.886751 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Jan 17 12:22:10.886782 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 17 12:22:10.886792 kernel: rcu: RCU event tracing is enabled. Jan 17 12:22:10.886799 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Jan 17 12:22:10.886806 kernel: Trampoline variant of Tasks RCU enabled. Jan 17 12:22:10.886813 kernel: Tracing variant of Tasks RCU enabled. Jan 17 12:22:10.886820 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Jan 17 12:22:10.886827 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Jan 17 12:22:10.886833 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Jan 17 12:22:10.886841 kernel: GICv3: 256 SPIs implemented Jan 17 12:22:10.886857 kernel: GICv3: 0 Extended SPIs implemented Jan 17 12:22:10.886864 kernel: Root IRQ handler: gic_handle_irq Jan 17 12:22:10.886870 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Jan 17 12:22:10.886877 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 Jan 17 12:22:10.886883 kernel: ITS [mem 0x08080000-0x0809ffff] Jan 17 12:22:10.886890 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1) Jan 17 12:22:10.886897 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1) Jan 17 12:22:10.886904 kernel: GICv3: using LPI property table @0x00000000400f0000 Jan 17 12:22:10.886911 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000 Jan 17 12:22:10.886917 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 17 12:22:10.886925 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jan 17 12:22:10.886932 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Jan 17 12:22:10.886939 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Jan 17 12:22:10.886946 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Jan 17 12:22:10.886952 kernel: arm-pv: using stolen time PV Jan 17 12:22:10.886959 kernel: Console: colour dummy device 80x25 Jan 17 12:22:10.886966 kernel: ACPI: Core revision 20230628 Jan 17 12:22:10.886973 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Jan 17 12:22:10.886980 kernel: pid_max: default: 32768 minimum: 301 Jan 17 12:22:10.886987 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 17 12:22:10.886996 kernel: landlock: Up and running. Jan 17 12:22:10.887002 kernel: SELinux: Initializing. Jan 17 12:22:10.887009 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 17 12:22:10.887016 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 17 12:22:10.887023 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 17 12:22:10.887030 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 17 12:22:10.887037 kernel: rcu: Hierarchical SRCU implementation. Jan 17 12:22:10.887044 kernel: rcu: Max phase no-delay instances is 400. Jan 17 12:22:10.887051 kernel: Platform MSI: ITS@0x8080000 domain created Jan 17 12:22:10.887059 kernel: PCI/MSI: ITS@0x8080000 domain created Jan 17 12:22:10.887066 kernel: Remapping and enabling EFI services. Jan 17 12:22:10.887072 kernel: smp: Bringing up secondary CPUs ... 
Jan 17 12:22:10.887079 kernel: Detected PIPT I-cache on CPU1 Jan 17 12:22:10.887086 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 Jan 17 12:22:10.887093 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000 Jan 17 12:22:10.887100 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jan 17 12:22:10.887107 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Jan 17 12:22:10.887114 kernel: Detected PIPT I-cache on CPU2 Jan 17 12:22:10.887121 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000 Jan 17 12:22:10.887129 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000 Jan 17 12:22:10.887136 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jan 17 12:22:10.887148 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1] Jan 17 12:22:10.887156 kernel: Detected PIPT I-cache on CPU3 Jan 17 12:22:10.887163 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000 Jan 17 12:22:10.887171 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000 Jan 17 12:22:10.887178 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jan 17 12:22:10.887185 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1] Jan 17 12:22:10.887192 kernel: smp: Brought up 1 node, 4 CPUs Jan 17 12:22:10.887201 kernel: SMP: Total of 4 processors activated. Jan 17 12:22:10.887208 kernel: CPU features: detected: 32-bit EL0 Support Jan 17 12:22:10.887215 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Jan 17 12:22:10.887224 kernel: CPU features: detected: Common not Private translations Jan 17 12:22:10.887232 kernel: CPU features: detected: CRC32 instructions Jan 17 12:22:10.887239 kernel: CPU features: detected: Enhanced Virtualization Traps Jan 17 12:22:10.887246 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Jan 17 12:22:10.887253 kernel: CPU features: detected: LSE atomic instructions Jan 17 12:22:10.887262 kernel: CPU features: detected: Privileged Access Never Jan 17 12:22:10.887269 kernel: CPU features: detected: RAS Extension Support Jan 17 12:22:10.887276 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Jan 17 12:22:10.887284 kernel: CPU: All CPU(s) started at EL1 Jan 17 12:22:10.887291 kernel: alternatives: applying system-wide alternatives Jan 17 12:22:10.887298 kernel: devtmpfs: initialized Jan 17 12:22:10.887305 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 17 12:22:10.887313 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Jan 17 12:22:10.887320 kernel: pinctrl core: initialized pinctrl subsystem Jan 17 12:22:10.887328 kernel: SMBIOS 3.0.0 present. 
Jan 17 12:22:10.887336 kernel: DMI: QEMU KVM Virtual Machine, BIOS edk2-20230524-3.fc38 05/24/2023 Jan 17 12:22:10.887343 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 17 12:22:10.887386 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Jan 17 12:22:10.887395 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Jan 17 12:22:10.887402 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Jan 17 12:22:10.887413 kernel: audit: initializing netlink subsys (disabled) Jan 17 12:22:10.887421 kernel: audit: type=2000 audit(0.024:1): state=initialized audit_enabled=0 res=1 Jan 17 12:22:10.887428 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 17 12:22:10.887438 kernel: cpuidle: using governor menu Jan 17 12:22:10.887445 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Jan 17 12:22:10.887452 kernel: ASID allocator initialised with 32768 entries Jan 17 12:22:10.887460 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 17 12:22:10.887467 kernel: Serial: AMBA PL011 UART driver Jan 17 12:22:10.887474 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Jan 17 12:22:10.887481 kernel: Modules: 0 pages in range for non-PLT usage Jan 17 12:22:10.887488 kernel: Modules: 509040 pages in range for PLT usage Jan 17 12:22:10.887496 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 17 12:22:10.887504 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Jan 17 12:22:10.887512 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Jan 17 12:22:10.887519 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Jan 17 12:22:10.887526 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 17 12:22:10.887534 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Jan 17 12:22:10.887541 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Jan 17 12:22:10.887548 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Jan 17 12:22:10.887555 kernel: ACPI: Added _OSI(Module Device) Jan 17 12:22:10.887562 kernel: ACPI: Added _OSI(Processor Device) Jan 17 12:22:10.887571 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jan 17 12:22:10.887578 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 17 12:22:10.887585 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 17 12:22:10.887593 kernel: ACPI: Interpreter enabled Jan 17 12:22:10.887600 kernel: ACPI: Using GIC for interrupt routing Jan 17 12:22:10.887607 kernel: ACPI: MCFG table detected, 1 entries Jan 17 12:22:10.887614 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA Jan 17 12:22:10.887622 kernel: printk: console [ttyAMA0] enabled Jan 17 12:22:10.887629 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 17 12:22:10.887764 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jan 17 12:22:10.887837 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Jan 17 12:22:10.887930 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Jan 17 12:22:10.887997 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 Jan 17 12:22:10.888059 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] Jan 17 12:22:10.888069 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Jan 17 12:22:10.888076 kernel: PCI host bridge to bus 0000:00 Jan 17 12:22:10.888148 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] Jan 17 12:22:10.888207 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Jan 17 12:22:10.888264 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] Jan 17 12:22:10.888320 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jan 17 12:22:10.888415 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 Jan 17 12:22:10.888500 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 Jan 17 12:22:10.888571 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f] Jan 17 12:22:10.888636 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff] Jan 17 12:22:10.888704 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref] Jan 17 12:22:10.888783 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref] Jan 17 12:22:10.888848 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff] Jan 17 12:22:10.888912 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f] Jan 17 12:22:10.888970 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Jan 17 12:22:10.889029 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Jan 17 12:22:10.889086 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Jan 17 12:22:10.889096 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Jan 17 12:22:10.889103 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Jan 17 12:22:10.889111 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Jan 17 12:22:10.889118 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Jan 17 12:22:10.889126 kernel: iommu: Default domain type: Translated Jan 17 12:22:10.889133 kernel: iommu: DMA domain TLB invalidation policy: strict mode Jan 17 12:22:10.889142 kernel: efivars: Registered efivars operations Jan 17 12:22:10.889149 kernel: vgaarb: loaded Jan 17 12:22:10.889157 kernel: clocksource: Switched to clocksource arch_sys_counter Jan 17 12:22:10.889164 kernel: VFS: Disk quotas dquot_6.6.0 Jan 17 12:22:10.889172 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 17 12:22:10.889179 kernel: pnp: PnP ACPI init Jan 17 12:22:10.889247 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Jan 17 12:22:10.889257 kernel: pnp: PnP ACPI: found 1 devices Jan 17 12:22:10.889264 kernel: NET: Registered PF_INET protocol family Jan 17 12:22:10.889274 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 17 12:22:10.889281 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jan 17 12:22:10.889289 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 17 12:22:10.889296 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 17 12:22:10.889303 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jan 17 12:22:10.889311 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jan 17 12:22:10.889318 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 17 12:22:10.889325 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 17 12:22:10.889335 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 17 12:22:10.889342 kernel: PCI: CLS 0 bytes, default 64 Jan 17 12:22:10.889357 kernel: kvm [1]: HYP mode not available
Jan 17 12:22:10.889365 kernel: Initialise system trusted keyrings Jan 17 12:22:10.889373 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jan 17 12:22:10.889380 kernel: Key type asymmetric registered Jan 17 12:22:10.889387 kernel: Asymmetric key parser 'x509' registered Jan 17 12:22:10.889394 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jan 17 12:22:10.889402 kernel: io scheduler mq-deadline registered Jan 17 12:22:10.889413 kernel: io scheduler kyber registered Jan 17 12:22:10.889423 kernel: io scheduler bfq registered Jan 17 12:22:10.889431 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Jan 17 12:22:10.889438 kernel: ACPI: button: Power Button [PWRB] Jan 17 12:22:10.889445 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Jan 17 12:22:10.889515 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007) Jan 17 12:22:10.889525 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 17 12:22:10.889532 kernel: thunder_xcv, ver 1.0 Jan 17 12:22:10.889540 kernel: thunder_bgx, ver 1.0 Jan 17 12:22:10.889547 kernel: nicpf, ver 1.0 Jan 17 12:22:10.889556 kernel: nicvf, ver 1.0 Jan 17 12:22:10.889628 kernel: rtc-efi rtc-efi.0: registered as rtc0 Jan 17 12:22:10.889690 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-01-17T12:22:10 UTC (1737116530) Jan 17 12:22:10.889700 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 17 12:22:10.889708 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available Jan 17 12:22:10.889715 kernel: watchdog: Delayed init of the lockup detector failed: -19 Jan 17 12:22:10.889722 kernel: watchdog: Hard watchdog permanently disabled Jan 17 12:22:10.889730 kernel: NET: Registered PF_INET6 protocol family Jan 17 12:22:10.889739 kernel: Segment Routing with IPv6 Jan 17 12:22:10.889746 kernel: In-situ OAM (IOAM) with IPv6 Jan 17 12:22:10.889753 kernel: NET: Registered PF_PACKET protocol family Jan 17 12:22:10.889760 kernel: Key type dns_resolver registered Jan 17 12:22:10.889768 kernel: registered taskstats version 1 Jan 17 12:22:10.889775 kernel: Loading compiled-in X.509 certificates Jan 17 12:22:10.889783 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: e5b890cba32c3e1c766d9a9b821ee4d2154ffee7' Jan 17 12:22:10.889790 kernel: Key type .fscrypt registered Jan 17 12:22:10.889797 kernel: Key type fscrypt-provisioning registered Jan 17 12:22:10.889806 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 17 12:22:10.889814 kernel: ima: Allocated hash algorithm: sha1 Jan 17 12:22:10.889821 kernel: ima: No architecture policies found Jan 17 12:22:10.889828 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Jan 17 12:22:10.889836 kernel: clk: Disabling unused clocks Jan 17 12:22:10.889843 kernel: Freeing unused kernel memory: 39360K Jan 17 12:22:10.889850 kernel: Run /init as init process Jan 17 12:22:10.889857 kernel: with arguments: Jan 17 12:22:10.889864 kernel: /init Jan 17 12:22:10.889873 kernel: with environment: Jan 17 12:22:10.889880 kernel: HOME=/ Jan 17 12:22:10.889887 kernel: TERM=linux Jan 17 12:22:10.889894 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 17 12:22:10.889904 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 17 12:22:10.889913 systemd[1]: Detected virtualization kvm. Jan 17 12:22:10.889921 systemd[1]: Detected architecture arm64. Jan 17 12:22:10.889930 systemd[1]: Running in initrd. Jan 17 12:22:10.889938 systemd[1]: No hostname configured, using default hostname. Jan 17 12:22:10.889946 systemd[1]: Hostname set to . Jan 17 12:22:10.889954 systemd[1]: Initializing machine ID from VM UUID. Jan 17 12:22:10.889962 systemd[1]: Queued start job for default target initrd.target. Jan 17 12:22:10.889970 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 17 12:22:10.889978 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 17 12:22:10.889986 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 17 12:22:10.889996 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 17 12:22:10.890003 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 17 12:22:10.890011 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 17 12:22:10.890021 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 17 12:22:10.890029 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 17 12:22:10.890037 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 17 12:22:10.890045 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 17 12:22:10.890054 systemd[1]: Reached target paths.target - Path Units. Jan 17 12:22:10.890062 systemd[1]: Reached target slices.target - Slice Units. Jan 17 12:22:10.890070 systemd[1]: Reached target swap.target - Swaps. Jan 17 12:22:10.890078 systemd[1]: Reached target timers.target - Timer Units. Jan 17 12:22:10.890085 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 17 12:22:10.890093 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 17 12:22:10.890101 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 17 12:22:10.890109 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 17 12:22:10.890117 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
Jan 17 12:22:10.890126 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 17 12:22:10.890135 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 17 12:22:10.890143 systemd[1]: Reached target sockets.target - Socket Units. Jan 17 12:22:10.890150 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 17 12:22:10.890159 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 17 12:22:10.890167 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 17 12:22:10.890175 systemd[1]: Starting systemd-fsck-usr.service... Jan 17 12:22:10.890182 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 17 12:22:10.890192 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 17 12:22:10.890199 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 12:22:10.890207 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 17 12:22:10.890215 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 17 12:22:10.890223 systemd[1]: Finished systemd-fsck-usr.service. Jan 17 12:22:10.890231 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 17 12:22:10.890256 systemd-journald[238]: Collecting audit messages is disabled. Jan 17 12:22:10.890274 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 12:22:10.890282 systemd-journald[238]: Journal started Jan 17 12:22:10.890302 systemd-journald[238]: Runtime Journal (/run/log/journal/34ef4af05e64410cb65f8cfd16d1db2d) is 5.9M, max 47.3M, 41.4M free. Jan 17 12:22:10.886828 systemd-modules-load[239]: Inserted module 'overlay' Jan 17 12:22:10.893086 systemd[1]: Started systemd-journald.service - Journal Service. Jan 17 12:22:10.896235 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 17 12:22:10.897970 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 17 12:22:10.904485 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 17 12:22:10.900372 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 17 12:22:10.904516 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 17 12:22:10.908706 kernel: Bridge firewalling registered Jan 17 12:22:10.907526 systemd-modules-load[239]: Inserted module 'br_netfilter' Jan 17 12:22:10.908838 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 17 12:22:10.912722 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 17 12:22:10.915639 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 17 12:22:10.917268 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 17 12:22:10.925397 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 17 12:22:10.927746 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 17 12:22:10.929784 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 17 12:22:10.933489 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Jan 17 12:22:10.941441 dracut-cmdline[273]: dracut-dracut-053 Jan 17 12:22:10.943846 dracut-cmdline[273]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=1dec90e7382e4708d8bb0385f9465c79a53a2c2baf70ef34aed11855f47d17b3 Jan 17 12:22:10.961622 systemd-resolved[276]: Positive Trust Anchors: Jan 17 12:22:10.961640 systemd-resolved[276]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 17 12:22:10.961672 systemd-resolved[276]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 17 12:22:10.966283 systemd-resolved[276]: Defaulting to hostname 'linux'. Jan 17 12:22:10.967219 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 17 12:22:10.971383 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 17 12:22:11.013376 kernel: SCSI subsystem initialized Jan 17 12:22:11.017367 kernel: Loading iSCSI transport class v2.0-870. Jan 17 12:22:11.027389 kernel: iscsi: registered transport (tcp) Jan 17 12:22:11.039567 kernel: iscsi: registered transport (qla4xxx) Jan 17 12:22:11.039585 kernel: QLogic iSCSI HBA Driver Jan 17 12:22:11.079890 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 17 12:22:11.090493 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 17 12:22:11.106431 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 17 12:22:11.106465 kernel: device-mapper: uevent: version 1.0.3 Jan 17 12:22:11.107468 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 17 12:22:11.155390 kernel: raid6: neonx8 gen() 15745 MB/s Jan 17 12:22:11.172394 kernel: raid6: neonx4 gen() 15612 MB/s Jan 17 12:22:11.189383 kernel: raid6: neonx2 gen() 13202 MB/s Jan 17 12:22:11.206384 kernel: raid6: neonx1 gen() 10450 MB/s Jan 17 12:22:11.223382 kernel: raid6: int64x8 gen() 6941 MB/s Jan 17 12:22:11.240382 kernel: raid6: int64x4 gen() 7335 MB/s Jan 17 12:22:11.257383 kernel: raid6: int64x2 gen() 6115 MB/s Jan 17 12:22:11.274461 kernel: raid6: int64x1 gen() 5043 MB/s Jan 17 12:22:11.274472 kernel: raid6: using algorithm neonx8 gen() 15745 MB/s Jan 17 12:22:11.292453 kernel: raid6: .... xor() 11912 MB/s, rmw enabled Jan 17 12:22:11.292480 kernel: raid6: using neon recovery algorithm Jan 17 12:22:11.297649 kernel: xor: measuring software checksum speed Jan 17 12:22:11.297676 kernel: 8regs : 19773 MB/sec Jan 17 12:22:11.298373 kernel: 32regs : 19646 MB/sec Jan 17 12:22:11.299541 kernel: arm64_neon : 22349 MB/sec Jan 17 12:22:11.299565 kernel: xor: using function: arm64_neon (22349 MB/sec) Jan 17 12:22:11.348377 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 17 12:22:11.358839 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. 
Jan 17 12:22:11.370536 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 17 12:22:11.382062 systemd-udevd[458]: Using default interface naming scheme 'v255'. Jan 17 12:22:11.385127 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 17 12:22:11.387651 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 17 12:22:11.401784 dracut-pre-trigger[466]: rd.md=0: removing MD RAID activation Jan 17 12:22:11.427444 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 17 12:22:11.438470 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 17 12:22:11.475302 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 17 12:22:11.481541 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 17 12:22:11.494382 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 17 12:22:11.495840 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 17 12:22:11.498020 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 17 12:22:11.500576 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 17 12:22:11.507513 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 17 12:22:11.518389 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 17 12:22:11.527771 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues Jan 17 12:22:11.545336 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Jan 17 12:22:11.545455 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 17 12:22:11.545467 kernel: GPT:9289727 != 19775487 Jan 17 12:22:11.545476 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 17 12:22:11.545492 kernel: GPT:9289727 != 19775487 Jan 17 12:22:11.545501 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 17 12:22:11.545510 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 17 12:22:11.531324 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 17 12:22:11.531394 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 17 12:22:11.533881 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 17 12:22:11.535086 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 17 12:22:11.535164 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 12:22:11.539747 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 12:22:11.557819 kernel: BTRFS: device fsid 8c8354db-e4b6-4022-87e4-d06cc74d2d9f devid 1 transid 40 /dev/vda3 scanned by (udev-worker) (518) Jan 17 12:22:11.550675 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 12:22:11.560405 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (519) Jan 17 12:22:11.566208 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 12:22:11.576364 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jan 17 12:22:11.580808 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. 
Jan 17 12:22:11.584693 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jan 17 12:22:11.585890 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jan 17 12:22:11.591451 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 17 12:22:11.600478 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 17 12:22:11.602168 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 17 12:22:11.607329 disk-uuid[549]: Primary Header is updated. Jan 17 12:22:11.607329 disk-uuid[549]: Secondary Entries is updated. Jan 17 12:22:11.607329 disk-uuid[549]: Secondary Header is updated. Jan 17 12:22:11.610462 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 17 12:22:11.622414 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 17 12:22:11.622665 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 17 12:22:12.629394 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 17 12:22:12.629849 disk-uuid[550]: The operation has completed successfully. Jan 17 12:22:12.651794 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 17 12:22:12.651888 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 17 12:22:12.669554 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 17 12:22:12.672191 sh[572]: Success Jan 17 12:22:12.685462 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Jan 17 12:22:12.714446 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 17 12:22:12.731733 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 17 12:22:12.735060 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 17 12:22:12.743680 kernel: BTRFS info (device dm-0): first mount of filesystem 8c8354db-e4b6-4022-87e4-d06cc74d2d9f Jan 17 12:22:12.743711 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jan 17 12:22:12.743721 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 17 12:22:12.745512 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 17 12:22:12.746364 kernel: BTRFS info (device dm-0): using free space tree Jan 17 12:22:12.749507 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 17 12:22:12.750772 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 17 12:22:12.763543 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 17 12:22:12.765061 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 17 12:22:12.773734 kernel: BTRFS info (device vda6): first mount of filesystem 5a5108d6-bc75-4f85-aab0-f326070fd0b5 Jan 17 12:22:12.773778 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Jan 17 12:22:12.774602 kernel: BTRFS info (device vda6): using free space tree Jan 17 12:22:12.776610 kernel: BTRFS info (device vda6): auto enabling async discard Jan 17 12:22:12.784826 systemd[1]: mnt-oem.mount: Deactivated successfully. 
Jan 17 12:22:12.786646 kernel: BTRFS info (device vda6): last unmount of filesystem 5a5108d6-bc75-4f85-aab0-f326070fd0b5 Jan 17 12:22:12.791093 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 17 12:22:12.798491 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 17 12:22:12.861729 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 17 12:22:12.869499 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 17 12:22:12.893763 systemd-networkd[763]: lo: Link UP Jan 17 12:22:12.893774 systemd-networkd[763]: lo: Gained carrier Jan 17 12:22:12.894721 systemd-networkd[763]: Enumeration completed Jan 17 12:22:12.894812 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 17 12:22:12.895508 systemd-networkd[763]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 12:22:12.895511 systemd-networkd[763]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 17 12:22:12.900613 ignition[665]: Ignition 2.19.0 Jan 17 12:22:12.896151 systemd-networkd[763]: eth0: Link UP Jan 17 12:22:12.900619 ignition[665]: Stage: fetch-offline Jan 17 12:22:12.896154 systemd-networkd[763]: eth0: Gained carrier Jan 17 12:22:12.900650 ignition[665]: no configs at "/usr/lib/ignition/base.d" Jan 17 12:22:12.896160 systemd-networkd[763]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 12:22:12.900657 ignition[665]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 17 12:22:12.896166 systemd[1]: Reached target network.target - Network. Jan 17 12:22:12.900796 ignition[665]: parsed url from cmdline: "" Jan 17 12:22:12.910423 systemd-networkd[763]: eth0: DHCPv4 address 10.0.0.132/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 17 12:22:12.900806 ignition[665]: no config URL provided Jan 17 12:22:12.900811 ignition[665]: reading system config file "/usr/lib/ignition/user.ign" Jan 17 12:22:12.900818 ignition[665]: no config at "/usr/lib/ignition/user.ign" Jan 17 12:22:12.900840 ignition[665]: op(1): [started] loading QEMU firmware config module Jan 17 12:22:12.900845 ignition[665]: op(1): executing: "modprobe" "qemu_fw_cfg" Jan 17 12:22:12.910226 ignition[665]: op(1): [finished] loading QEMU firmware config module Jan 17 12:22:12.952982 ignition[665]: parsing config with SHA512: 6953d4bac23920ad6701b390132394da5e83a0dd2aba825697dfb1c5bbdef66167ebd61277457f9b8c03eb4b1dc2b0b6088694ec997e6d46423806578fd47186 Jan 17 12:22:12.958188 unknown[665]: fetched base config from "system" Jan 17 12:22:12.958204 unknown[665]: fetched user config from "qemu" Jan 17 12:22:12.958787 ignition[665]: fetch-offline: fetch-offline passed Jan 17 12:22:12.960332 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 17 12:22:12.958937 ignition[665]: Ignition finished successfully Jan 17 12:22:12.961813 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jan 17 12:22:12.969515 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Jan 17 12:22:12.979186 ignition[770]: Ignition 2.19.0 Jan 17 12:22:12.979196 ignition[770]: Stage: kargs Jan 17 12:22:12.979372 ignition[770]: no configs at "/usr/lib/ignition/base.d" Jan 17 12:22:12.979382 ignition[770]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 17 12:22:12.980244 ignition[770]: kargs: kargs passed Jan 17 12:22:12.982598 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 17 12:22:12.980282 ignition[770]: Ignition finished successfully Jan 17 12:22:12.985000 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 17 12:22:12.997544 ignition[779]: Ignition 2.19.0 Jan 17 12:22:12.997552 ignition[779]: Stage: disks Jan 17 12:22:12.997709 ignition[779]: no configs at "/usr/lib/ignition/base.d" Jan 17 12:22:12.997718 ignition[779]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 17 12:22:13.000131 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 17 12:22:12.998608 ignition[779]: disks: disks passed Jan 17 12:22:13.001513 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 17 12:22:12.998650 ignition[779]: Ignition finished successfully Jan 17 12:22:13.003304 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 17 12:22:13.005297 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 17 12:22:13.006823 systemd[1]: Reached target sysinit.target - System Initialization. Jan 17 12:22:13.008698 systemd[1]: Reached target basic.target - Basic System. Jan 17 12:22:13.020526 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 17 12:22:13.031520 systemd-fsck[791]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jan 17 12:22:13.037632 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 17 12:22:13.055445 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 17 12:22:13.101380 kernel: EXT4-fs (vda9): mounted filesystem 5d516319-3144-49e6-9760-d0f29faba535 r/w with ordered data mode. Quota mode: none. Jan 17 12:22:13.101875 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 17 12:22:13.103093 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 17 12:22:13.119427 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 17 12:22:13.121106 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 17 12:22:13.122319 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 17 12:22:13.122413 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 17 12:22:13.129933 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (799) Jan 17 12:22:13.122465 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 17 12:22:13.135322 kernel: BTRFS info (device vda6): first mount of filesystem 5a5108d6-bc75-4f85-aab0-f326070fd0b5 Jan 17 12:22:13.135342 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Jan 17 12:22:13.135361 kernel: BTRFS info (device vda6): using free space tree Jan 17 12:22:13.135373 kernel: BTRFS info (device vda6): auto enabling async discard Jan 17 12:22:13.126612 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 17 12:22:13.128574 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Jan 17 12:22:13.136821 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 17 12:22:13.174238 initrd-setup-root[824]: cut: /sysroot/etc/passwd: No such file or directory Jan 17 12:22:13.178362 initrd-setup-root[831]: cut: /sysroot/etc/group: No such file or directory Jan 17 12:22:13.181410 initrd-setup-root[838]: cut: /sysroot/etc/shadow: No such file or directory Jan 17 12:22:13.184230 initrd-setup-root[845]: cut: /sysroot/etc/gshadow: No such file or directory Jan 17 12:22:13.254827 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 17 12:22:13.267457 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 17 12:22:13.269889 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 17 12:22:13.275366 kernel: BTRFS info (device vda6): last unmount of filesystem 5a5108d6-bc75-4f85-aab0-f326070fd0b5 Jan 17 12:22:13.290896 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 17 12:22:13.292770 ignition[912]: INFO : Ignition 2.19.0 Jan 17 12:22:13.292770 ignition[912]: INFO : Stage: mount Jan 17 12:22:13.292770 ignition[912]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 17 12:22:13.292770 ignition[912]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 17 12:22:13.292770 ignition[912]: INFO : mount: mount passed Jan 17 12:22:13.292770 ignition[912]: INFO : Ignition finished successfully Jan 17 12:22:13.293674 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 17 12:22:13.305485 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 17 12:22:13.742629 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 17 12:22:13.751595 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 17 12:22:13.758090 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (926) Jan 17 12:22:13.758126 kernel: BTRFS info (device vda6): first mount of filesystem 5a5108d6-bc75-4f85-aab0-f326070fd0b5 Jan 17 12:22:13.758137 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Jan 17 12:22:13.759774 kernel: BTRFS info (device vda6): using free space tree Jan 17 12:22:13.762375 kernel: BTRFS info (device vda6): auto enabling async discard Jan 17 12:22:13.762942 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 17 12:22:13.778150 ignition[943]: INFO : Ignition 2.19.0 Jan 17 12:22:13.778150 ignition[943]: INFO : Stage: files Jan 17 12:22:13.779713 ignition[943]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 17 12:22:13.779713 ignition[943]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 17 12:22:13.779713 ignition[943]: DEBUG : files: compiled without relabeling support, skipping Jan 17 12:22:13.782956 ignition[943]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 17 12:22:13.782956 ignition[943]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 17 12:22:13.786048 ignition[943]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 17 12:22:13.787406 ignition[943]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 17 12:22:13.787406 ignition[943]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 17 12:22:13.786609 unknown[943]: wrote ssh authorized keys file for user: core Jan 17 12:22:13.790952 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Jan 17 12:22:13.790952 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Jan 17 12:22:13.790952 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Jan 17 12:22:13.790952 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Jan 17 12:22:14.049418 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jan 17 12:22:14.324113 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Jan 17 12:22:14.324113 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jan 17 12:22:14.327771 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jan 17 12:22:14.327771 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 17 12:22:14.327771 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 17 12:22:14.327771 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 17 12:22:14.327771 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 17 12:22:14.327771 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 17 12:22:14.327771 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 17 12:22:14.327771 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 17 12:22:14.327771 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 17 12:22:14.327771 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Jan 17 12:22:14.327771 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Jan 17 12:22:14.327771 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Jan 17 12:22:14.327771 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-arm64.raw: attempt #1 Jan 17 12:22:14.628832 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jan 17 12:22:14.798298 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Jan 17 12:22:14.798298 ignition[943]: INFO : files: op(c): [started] processing unit "containerd.service" Jan 17 12:22:14.801929 ignition[943]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jan 17 12:22:14.801929 ignition[943]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jan 17 12:22:14.801929 ignition[943]: INFO : files: op(c): [finished] processing unit "containerd.service" Jan 17 12:22:14.801929 ignition[943]: INFO : files: op(e): [started] processing unit "prepare-helm.service" Jan 17 12:22:14.801929 ignition[943]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 17 12:22:14.801929 ignition[943]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 17 12:22:14.801929 ignition[943]: INFO : files: op(e): [finished] processing unit "prepare-helm.service" Jan 17 12:22:14.801929 ignition[943]: INFO : files: op(10): [started] processing unit "coreos-metadata.service" Jan 17 12:22:14.801929 ignition[943]: INFO : files: op(10): op(11): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 17 12:22:14.801929 ignition[943]: INFO : files: op(10): op(11): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 17 12:22:14.801929 ignition[943]: INFO : files: op(10): [finished] processing unit "coreos-metadata.service" Jan 17 12:22:14.801929 ignition[943]: INFO : files: op(12): [started] setting preset to disabled for "coreos-metadata.service" Jan 17 12:22:14.825750 ignition[943]: INFO : files: op(12): op(13): [started] removing enablement symlink(s) for "coreos-metadata.service" Jan 17 12:22:14.828980 ignition[943]: INFO : files: op(12): op(13): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jan 17 12:22:14.831468 ignition[943]: INFO : files: op(12): [finished] setting preset to disabled for "coreos-metadata.service" Jan 17 12:22:14.831468 ignition[943]: INFO : files: op(14): [started] setting preset to enabled for "prepare-helm.service" Jan 17 12:22:14.831468 ignition[943]: INFO : files: op(14): [finished] setting preset to enabled for "prepare-helm.service"
Jan 17 12:22:14.831468 ignition[943]: INFO : files: createResultFile: createFiles: op(15): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 17 12:22:14.831468 ignition[943]: INFO : files: createResultFile: createFiles: op(15): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 17 12:22:14.831468 ignition[943]: INFO : files: files passed Jan 17 12:22:14.831468 ignition[943]: INFO : Ignition finished successfully Jan 17 12:22:14.833104 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 17 12:22:14.843498 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 17 12:22:14.845230 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 17 12:22:14.847417 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 17 12:22:14.849380 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 17 12:22:14.852504 initrd-setup-root-after-ignition[971]: grep: /sysroot/oem/oem-release: No such file or directory Jan 17 12:22:14.853987 initrd-setup-root-after-ignition[973]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 17 12:22:14.853987 initrd-setup-root-after-ignition[973]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 17 12:22:14.856969 initrd-setup-root-after-ignition[977]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 17 12:22:14.856095 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 17 12:22:14.858576 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 17 12:22:14.870586 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 17 12:22:14.888504 systemd-networkd[763]: eth0: Gained IPv6LL Jan 17 12:22:14.888790 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 17 12:22:14.888891 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 17 12:22:14.890611 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 17 12:22:14.891590 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 17 12:22:14.892643 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 17 12:22:14.893277 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 17 12:22:14.908091 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 17 12:22:14.916554 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 17 12:22:14.923608 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 17 12:22:14.924787 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 17 12:22:14.926760 systemd[1]: Stopped target timers.target - Timer Units. Jan 17 12:22:14.928471 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 17 12:22:14.928573 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 17 12:22:14.931124 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 17 12:22:14.933094 systemd[1]: Stopped target basic.target - Basic System. Jan 17 12:22:14.934706 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 17 12:22:14.936380 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 17 12:22:14.938412 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 17 12:22:14.940364 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 17 12:22:14.942177 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 17 12:22:14.944100 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 17 12:22:14.946016 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 17 12:22:14.947713 systemd[1]: Stopped target swap.target - Swaps. Jan 17 12:22:14.949232 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 17 12:22:14.949336 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 17 12:22:14.951631 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 17 12:22:14.953476 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 17 12:22:14.955383 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 17 12:22:14.957243 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 17 12:22:14.958526 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 17 12:22:14.958632 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 17 12:22:14.961403 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 17 12:22:14.961516 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 17 12:22:14.963596 systemd[1]: Stopped target paths.target - Path Units. Jan 17 12:22:14.965187 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 17 12:22:14.966894 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 17 12:22:14.968138 systemd[1]: Stopped target slices.target - Slice Units. Jan 17 12:22:14.969824 systemd[1]: Stopped target sockets.target - Socket Units. Jan 17 12:22:14.971958 systemd[1]: iscsid.socket: Deactivated successfully. Jan 17 12:22:14.972036 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 17 12:22:14.973521 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 17 12:22:14.973599 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 17 12:22:14.975136 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 17 12:22:14.975236 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 17 12:22:14.976897 systemd[1]: ignition-files.service: Deactivated successfully. Jan 17 12:22:14.976993 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 17 12:22:14.993503 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 17 12:22:14.994963 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 17 12:22:14.995987 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 17 12:22:14.996121 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 17 12:22:14.998000 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 17 12:22:14.998100 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 17 12:22:15.003568 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 17 12:22:15.004405 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. 
Jan 17 12:22:15.008393 ignition[998]: INFO : Ignition 2.19.0 Jan 17 12:22:15.008393 ignition[998]: INFO : Stage: umount Jan 17 12:22:15.008393 ignition[998]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 17 12:22:15.008393 ignition[998]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 17 12:22:15.008393 ignition[998]: INFO : umount: umount passed Jan 17 12:22:15.008393 ignition[998]: INFO : Ignition finished successfully Jan 17 12:22:15.008046 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 17 12:22:15.008129 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 17 12:22:15.009551 systemd[1]: Stopped target network.target - Network. Jan 17 12:22:15.010919 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 17 12:22:15.010969 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 17 12:22:15.012716 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 17 12:22:15.012761 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 17 12:22:15.014686 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 17 12:22:15.014730 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 17 12:22:15.016664 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 17 12:22:15.016707 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 17 12:22:15.018456 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 17 12:22:15.020112 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 17 12:22:15.022624 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 17 12:22:15.032975 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 17 12:22:15.033074 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 17 12:22:15.033407 systemd-networkd[763]: eth0: DHCPv6 lease lost Jan 17 12:22:15.035221 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 17 12:22:15.035337 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 17 12:22:15.037893 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 17 12:22:15.037945 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 17 12:22:15.048515 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 17 12:22:15.049417 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 17 12:22:15.049473 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 17 12:22:15.051459 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 17 12:22:15.051503 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 17 12:22:15.053254 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 17 12:22:15.053297 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 17 12:22:15.055367 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 17 12:22:15.055420 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 17 12:22:15.057654 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 17 12:22:15.066705 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 17 12:22:15.066816 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 17 12:22:15.071157 systemd[1]: systemd-udevd.service: Deactivated successfully. 
Jan 17 12:22:15.071286 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 17 12:22:15.073595 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 17 12:22:15.073665 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 17 12:22:15.075789 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 17 12:22:15.075852 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 17 12:22:15.077181 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 17 12:22:15.077213 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 17 12:22:15.078867 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 17 12:22:15.078914 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 17 12:22:15.081583 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 17 12:22:15.081626 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 17 12:22:15.084175 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 17 12:22:15.084216 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 17 12:22:15.086168 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 17 12:22:15.086208 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 17 12:22:15.098483 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 17 12:22:15.099488 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 17 12:22:15.099542 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 17 12:22:15.101594 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 17 12:22:15.101637 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 17 12:22:15.103581 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 17 12:22:15.103622 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 17 12:22:15.105721 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 17 12:22:15.105764 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 12:22:15.107925 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 17 12:22:15.107997 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 17 12:22:15.110280 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 17 12:22:15.112384 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 17 12:22:15.121404 systemd[1]: Switching root. Jan 17 12:22:15.149031 systemd-journald[238]: Journal stopped Jan 17 12:22:15.856470 systemd-journald[238]: Received SIGTERM from PID 1 (systemd). 
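At this point the initramfs journald is stopped and PID 1 switches into the real root filesystem; the runtime journal written so far is kept and flushed to persistent storage later in the boot. A small sketch, assuming journalctl is available on the running system, for pulling these pre-switch-root entries back out afterwards:

  journalctl -b -o short-precise | head -n 60      # earliest entries of this boot, including the initrd phase
  journalctl -b _PID=1 --grep 'Switching root'     # PID 1's own switch-root message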
Jan 17 12:22:15.856540 kernel: SELinux: policy capability network_peer_controls=1 Jan 17 12:22:15.856554 kernel: SELinux: policy capability open_perms=1 Jan 17 12:22:15.856565 kernel: SELinux: policy capability extended_socket_class=1 Jan 17 12:22:15.856575 kernel: SELinux: policy capability always_check_network=0 Jan 17 12:22:15.856586 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 17 12:22:15.856596 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 17 12:22:15.856607 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 17 12:22:15.856617 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 17 12:22:15.856627 kernel: audit: type=1403 audit(1737116535.324:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 17 12:22:15.856644 systemd[1]: Successfully loaded SELinux policy in 31.768ms. Jan 17 12:22:15.856664 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 8.800ms. Jan 17 12:22:15.856676 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 17 12:22:15.856688 systemd[1]: Detected virtualization kvm. Jan 17 12:22:15.856699 systemd[1]: Detected architecture arm64. Jan 17 12:22:15.856710 systemd[1]: Detected first boot. Jan 17 12:22:15.856721 systemd[1]: Initializing machine ID from VM UUID. Jan 17 12:22:15.856732 zram_generator::config[1060]: No configuration found. Jan 17 12:22:15.856747 systemd[1]: Populated /etc with preset unit settings. Jan 17 12:22:15.856758 systemd[1]: Queued start job for default target multi-user.target. Jan 17 12:22:15.856770 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 17 12:22:15.856781 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 17 12:22:15.856792 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 17 12:22:15.856803 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 17 12:22:15.856814 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 17 12:22:15.856825 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 17 12:22:15.856836 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 17 12:22:15.856849 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 17 12:22:15.856861 systemd[1]: Created slice user.slice - User and Session Slice. Jan 17 12:22:15.856872 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 17 12:22:15.856883 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 17 12:22:15.856894 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 17 12:22:15.856905 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 17 12:22:15.856916 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 17 12:22:15.856927 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... 
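Here systemd 255 has loaded the SELinux policy in about 32 ms, detected KVM on arm64, and, since this is the image's first boot, seeded the machine ID from the VM UUID. A hedged sketch of the matching checks on the running system:

  systemctl --version | head -n 2    # systemd 255 plus the compile-time feature flags listed above
  systemd-detect-virt                # reports "kvm" for this guest
  cat /etc/machine-id                # ID initialized here, committed later by systemd-machine-id-commit.service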
Jan 17 12:22:15.856938 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Jan 17 12:22:15.856951 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 17 12:22:15.856962 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 17 12:22:15.856973 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 17 12:22:15.856991 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 17 12:22:15.857002 systemd[1]: Reached target slices.target - Slice Units. Jan 17 12:22:15.857013 systemd[1]: Reached target swap.target - Swaps. Jan 17 12:22:15.857026 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 17 12:22:15.857037 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 17 12:22:15.857049 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 17 12:22:15.857060 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 17 12:22:15.857071 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 17 12:22:15.857082 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 17 12:22:15.857093 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 17 12:22:15.857104 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 17 12:22:15.857115 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 17 12:22:15.857126 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 17 12:22:15.857137 systemd[1]: Mounting media.mount - External Media Directory... Jan 17 12:22:15.857149 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 17 12:22:15.857160 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 17 12:22:15.857171 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 17 12:22:15.857181 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 17 12:22:15.857193 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 12:22:15.857203 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 17 12:22:15.857215 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 17 12:22:15.857229 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 17 12:22:15.857241 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 17 12:22:15.857254 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 17 12:22:15.857265 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 17 12:22:15.857281 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 17 12:22:15.857292 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 17 12:22:15.857303 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Jan 17 12:22:15.857315 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) 
Jan 17 12:22:15.857325 kernel: fuse: init (API version 7.39) Jan 17 12:22:15.857335 kernel: ACPI: bus type drm_connector registered Jan 17 12:22:15.857347 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 17 12:22:15.857367 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 17 12:22:15.857378 kernel: loop: module loaded Jan 17 12:22:15.857389 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 17 12:22:15.857405 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 17 12:22:15.857417 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 17 12:22:15.857445 systemd-journald[1145]: Collecting audit messages is disabled. Jan 17 12:22:15.857468 systemd-journald[1145]: Journal started Jan 17 12:22:15.857492 systemd-journald[1145]: Runtime Journal (/run/log/journal/34ef4af05e64410cb65f8cfd16d1db2d) is 5.9M, max 47.3M, 41.4M free. Jan 17 12:22:15.860244 systemd[1]: Started systemd-journald.service - Journal Service. Jan 17 12:22:15.861198 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 17 12:22:15.862543 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 17 12:22:15.863847 systemd[1]: Mounted media.mount - External Media Directory. Jan 17 12:22:15.864953 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 17 12:22:15.866186 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 17 12:22:15.867459 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 17 12:22:15.868715 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 17 12:22:15.870178 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 17 12:22:15.871676 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 17 12:22:15.871843 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 17 12:22:15.873314 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 17 12:22:15.873502 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 17 12:22:15.874847 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 17 12:22:15.875002 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 17 12:22:15.876314 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 17 12:22:15.876499 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 17 12:22:15.878128 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 17 12:22:15.878288 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 17 12:22:15.879632 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 17 12:22:15.879841 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 17 12:22:15.881387 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 17 12:22:15.882753 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 17 12:22:15.884515 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 17 12:22:15.896517 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 17 12:22:15.905444 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... 
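systemd-journald is now up with a 5.9M runtime journal under /run/log/journal, and the modprobe@*.service instances have pulled in fuse, drm and loop (the kernel lines above). A short sketch for checking the same state by hand:

  journalctl --disk-usage                    # combined size of runtime and persistent journals
  lsmod | grep -E 'fuse|loop|dm_mod'         # modules loaded via the modprobe@ one-shot units
  systemctl status modprobe@fuse.service     # the unit instance that loaded fuse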
Jan 17 12:22:15.907503 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 17 12:22:15.908643 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 17 12:22:15.911507 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 17 12:22:15.913571 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 17 12:22:15.914806 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 17 12:22:15.915776 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 17 12:22:15.916848 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 17 12:22:15.921435 systemd-journald[1145]: Time spent on flushing to /var/log/journal/34ef4af05e64410cb65f8cfd16d1db2d is 15.105ms for 844 entries. Jan 17 12:22:15.921435 systemd-journald[1145]: System Journal (/var/log/journal/34ef4af05e64410cb65f8cfd16d1db2d) is 8.0M, max 195.6M, 187.6M free. Jan 17 12:22:15.941831 systemd-journald[1145]: Received client request to flush runtime journal. Jan 17 12:22:15.920586 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 17 12:22:15.923876 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 17 12:22:15.926604 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 17 12:22:15.928087 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 17 12:22:15.929437 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 17 12:22:15.930932 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 17 12:22:15.940860 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 17 12:22:15.946997 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 17 12:22:15.948959 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 17 12:22:15.950590 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 17 12:22:15.958840 systemd-tmpfiles[1195]: ACLs are not supported, ignoring. Jan 17 12:22:15.958859 systemd-tmpfiles[1195]: ACLs are not supported, ignoring. Jan 17 12:22:15.961595 udevadm[1206]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jan 17 12:22:15.962348 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 17 12:22:15.969573 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 17 12:22:15.989699 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 17 12:22:15.998594 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 17 12:22:16.009076 systemd-tmpfiles[1216]: ACLs are not supported, ignoring. Jan 17 12:22:16.009098 systemd-tmpfiles[1216]: ACLs are not supported, ignoring. Jan 17 12:22:16.012540 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 17 12:22:16.323499 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. 
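systemd-journal-flush.service has moved the runtime journal into the persistent store at /var/log/journal (8.0M used of a 195.6M cap, per the message above). The same flush can be requested by hand; a minimal sketch, assuming root privileges:

  journalctl --flush         # ask journald to migrate /run/log/journal into /var/log/journal
  du -sh /var/log/journal    # size of the persistent journal after the flush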
Jan 17 12:22:16.340557 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 17 12:22:16.358672 systemd-udevd[1222]: Using default interface naming scheme 'v255'. Jan 17 12:22:16.371945 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 17 12:22:16.384169 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 17 12:22:16.404526 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 17 12:22:16.406377 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1229) Jan 17 12:22:16.409869 systemd[1]: Found device dev-ttyAMA0.device - /dev/ttyAMA0. Jan 17 12:22:16.459475 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 17 12:22:16.462582 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 17 12:22:16.497569 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 12:22:16.506726 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 17 12:22:16.509665 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 17 12:22:16.517081 systemd-networkd[1230]: lo: Link UP Jan 17 12:22:16.517089 systemd-networkd[1230]: lo: Gained carrier Jan 17 12:22:16.517790 systemd-networkd[1230]: Enumeration completed Jan 17 12:22:16.517898 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 17 12:22:16.519941 systemd-networkd[1230]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 12:22:16.519950 systemd-networkd[1230]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 17 12:22:16.520533 systemd-networkd[1230]: eth0: Link UP Jan 17 12:22:16.520543 systemd-networkd[1230]: eth0: Gained carrier Jan 17 12:22:16.520555 systemd-networkd[1230]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 12:22:16.524099 lvm[1258]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 17 12:22:16.525516 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 17 12:22:16.542444 systemd-networkd[1230]: eth0: DHCPv4 address 10.0.0.132/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 17 12:22:16.545532 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 12:22:16.551781 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 17 12:22:16.553457 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 17 12:22:16.570494 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 17 12:22:16.573780 lvm[1268]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 17 12:22:16.611637 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 17 12:22:16.613042 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 17 12:22:16.614301 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 17 12:22:16.614336 systemd[1]: Reached target local-fs.target - Local File Systems. 
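systemd-networkd matched eth0 against the shipped zz-default.network (hence the "potentially unpredictable interface name" note), brought the link up, and took 10.0.0.132/16 with gateway 10.0.0.1 from DHCPv4; the IPv6 link-local address follows a little later. A sketch for inspecting that configuration:

  networkctl status eth0                              # link state, addresses, gateway and DHCP lease details
  cat /usr/lib/systemd/network/zz-default.network     # the catch-all .network unit matched above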
Jan 17 12:22:16.615318 systemd[1]: Reached target machines.target - Containers. Jan 17 12:22:16.617229 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 17 12:22:16.637483 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 17 12:22:16.639720 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 17 12:22:16.640855 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 12:22:16.641726 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 17 12:22:16.643860 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 17 12:22:16.648190 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 17 12:22:16.650116 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 17 12:22:16.654432 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 17 12:22:16.657435 kernel: loop0: detected capacity change from 0 to 114432 Jan 17 12:22:16.665145 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 17 12:22:16.666434 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 17 12:22:16.670647 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 17 12:22:16.698384 kernel: loop1: detected capacity change from 0 to 114328 Jan 17 12:22:16.736395 kernel: loop2: detected capacity change from 0 to 194512 Jan 17 12:22:16.780382 kernel: loop3: detected capacity change from 0 to 114432 Jan 17 12:22:16.784377 kernel: loop4: detected capacity change from 0 to 114328 Jan 17 12:22:16.789406 kernel: loop5: detected capacity change from 0 to 194512 Jan 17 12:22:16.792866 (sd-merge)[1289]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jan 17 12:22:16.793226 (sd-merge)[1289]: Merged extensions into '/usr'. Jan 17 12:22:16.797991 systemd[1]: Reloading requested from client PID 1276 ('systemd-sysext') (unit systemd-sysext.service)... Jan 17 12:22:16.798010 systemd[1]: Reloading... Jan 17 12:22:16.843404 zram_generator::config[1316]: No configuration found. Jan 17 12:22:16.878505 ldconfig[1273]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 17 12:22:16.946308 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 12:22:16.988335 systemd[1]: Reloading finished in 189 ms. Jan 17 12:22:17.005107 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 17 12:22:17.006662 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 17 12:22:17.023521 systemd[1]: Starting ensure-sysext.service... Jan 17 12:22:17.025445 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 17 12:22:17.030481 systemd[1]: Reloading requested from client PID 1358 ('systemctl') (unit ensure-sysext.service)... Jan 17 12:22:17.030497 systemd[1]: Reloading... Jan 17 12:22:17.041672 systemd-tmpfiles[1359]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. 
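The (sd-merge) lines are systemd-sysext merging the containerd-flatcar, docker-flatcar and kubernetes system extensions (the loop0..loop5 images, including the kubernetes-v1.29.2 file Ignition placed under /opt/extensions) into /usr, after which systemd reloads so the newly visible units can be used. A sketch of the equivalent manual inspection, assuming the stock sysext tooling:

  systemd-sysext status                               # which extension images are merged and into which hierarchies
  ls -l /etc/extensions /opt/extensions/kubernetes    # extension images and the symlink from the Ignition stage
  systemd-sysext refresh                              # re-merge after adding or removing an extension image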
Jan 17 12:22:17.041931 systemd-tmpfiles[1359]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 17 12:22:17.042569 systemd-tmpfiles[1359]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 17 12:22:17.042787 systemd-tmpfiles[1359]: ACLs are not supported, ignoring. Jan 17 12:22:17.042830 systemd-tmpfiles[1359]: ACLs are not supported, ignoring. Jan 17 12:22:17.044857 systemd-tmpfiles[1359]: Detected autofs mount point /boot during canonicalization of boot. Jan 17 12:22:17.044871 systemd-tmpfiles[1359]: Skipping /boot Jan 17 12:22:17.051696 systemd-tmpfiles[1359]: Detected autofs mount point /boot during canonicalization of boot. Jan 17 12:22:17.051711 systemd-tmpfiles[1359]: Skipping /boot Jan 17 12:22:17.076379 zram_generator::config[1384]: No configuration found. Jan 17 12:22:17.160182 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 12:22:17.202275 systemd[1]: Reloading finished in 171 ms. Jan 17 12:22:17.218983 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 17 12:22:17.234963 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 17 12:22:17.237341 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 17 12:22:17.239468 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 17 12:22:17.243537 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 17 12:22:17.247649 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 17 12:22:17.252405 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 12:22:17.253456 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 17 12:22:17.258381 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 17 12:22:17.263182 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 17 12:22:17.265326 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 12:22:17.268630 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 17 12:22:17.268772 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 17 12:22:17.270821 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 17 12:22:17.270946 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 17 12:22:17.273911 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 17 12:22:17.274087 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 17 12:22:17.280653 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 17 12:22:17.285902 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 12:22:17.290600 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 17 12:22:17.295583 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 17 12:22:17.299709 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Jan 17 12:22:17.300790 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 12:22:17.301625 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 17 12:22:17.305149 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 17 12:22:17.307046 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 17 12:22:17.307193 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 17 12:22:17.308764 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 17 12:22:17.308897 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 17 12:22:17.310599 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 17 12:22:17.310792 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 17 12:22:17.311095 augenrules[1472]: No rules Jan 17 12:22:17.314597 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 17 12:22:17.319327 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 12:22:17.330482 systemd-resolved[1434]: Positive Trust Anchors: Jan 17 12:22:17.330502 systemd-resolved[1434]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 17 12:22:17.330534 systemd-resolved[1434]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 17 12:22:17.330579 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 17 12:22:17.332648 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 17 12:22:17.335596 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 17 12:22:17.336565 systemd-resolved[1434]: Defaulting to hostname 'linux'. Jan 17 12:22:17.338573 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 17 12:22:17.339713 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 12:22:17.341616 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 17 12:22:17.342661 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 17 12:22:17.343875 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 17 12:22:17.345654 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 17 12:22:17.345790 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 17 12:22:17.347486 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 17 12:22:17.347625 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 17 12:22:17.349092 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
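systemd-resolved starts with the root DNSSEC trust anchor and the standard negative trust anchors, and defaults to the hostname 'linux' because no hostname has been set yet. A minimal sketch for querying its state (example.com is only a placeholder name):

  resolvectl status              # per-link DNS servers, DNSSEC mode and search domains
  resolvectl query example.com   # resolve a name through resolved
  hostnamectl                    # static and transient hostname, 'linux' at this point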
Jan 17 12:22:17.349233 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 17 12:22:17.350894 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 17 12:22:17.351109 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 17 12:22:17.352841 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 17 12:22:17.355616 systemd[1]: Finished ensure-sysext.service. Jan 17 12:22:17.361533 systemd[1]: Reached target network.target - Network. Jan 17 12:22:17.362434 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 17 12:22:17.363755 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 17 12:22:17.363811 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 17 12:22:17.371502 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 17 12:22:17.410740 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 17 12:22:17.411464 systemd-timesyncd[1502]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jan 17 12:22:17.411511 systemd-timesyncd[1502]: Initial clock synchronization to Fri 2025-01-17 12:22:17.394503 UTC. Jan 17 12:22:17.412277 systemd[1]: Reached target sysinit.target - System Initialization. Jan 17 12:22:17.413447 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 17 12:22:17.414641 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 17 12:22:17.415856 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 17 12:22:17.417081 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 17 12:22:17.417116 systemd[1]: Reached target paths.target - Path Units. Jan 17 12:22:17.418016 systemd[1]: Reached target time-set.target - System Time Set. Jan 17 12:22:17.419145 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 17 12:22:17.420296 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 17 12:22:17.421531 systemd[1]: Reached target timers.target - Timer Units. Jan 17 12:22:17.422842 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 17 12:22:17.425204 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 17 12:22:17.427158 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 17 12:22:17.438276 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 17 12:22:17.439387 systemd[1]: Reached target sockets.target - Socket Units. Jan 17 12:22:17.440320 systemd[1]: Reached target basic.target - Basic System. Jan 17 12:22:17.441404 systemd[1]: System is tainted: cgroupsv1 Jan 17 12:22:17.441449 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 17 12:22:17.441470 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 17 12:22:17.442470 systemd[1]: Starting containerd.service - containerd container runtime... Jan 17 12:22:17.444433 systemd[1]: Starting dbus.service - D-Bus System Message Bus... 
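systemd-timesyncd reached the NTP server 10.0.0.1:123 advertised by DHCP and performed the initial clock synchronization to 12:22:17.394503 UTC, after which time-set.target and the timer units come up. A sketch for checking synchronization on such a host:

  timedatectl timesync-status   # server address, poll interval, measured offset and jitter
  timedatectl status            # "System clock synchronized: yes" once timesyncd has synced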
Jan 17 12:22:17.446221 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 17 12:22:17.454471 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 17 12:22:17.455512 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 17 12:22:17.459517 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 17 12:22:17.460379 jq[1508]: false Jan 17 12:22:17.461629 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 17 12:22:17.466561 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 17 12:22:17.470113 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 17 12:22:17.470404 extend-filesystems[1510]: Found loop3 Jan 17 12:22:17.470404 extend-filesystems[1510]: Found loop4 Jan 17 12:22:17.470404 extend-filesystems[1510]: Found loop5 Jan 17 12:22:17.470404 extend-filesystems[1510]: Found vda Jan 17 12:22:17.478654 extend-filesystems[1510]: Found vda1 Jan 17 12:22:17.478654 extend-filesystems[1510]: Found vda2 Jan 17 12:22:17.478654 extend-filesystems[1510]: Found vda3 Jan 17 12:22:17.478654 extend-filesystems[1510]: Found usr Jan 17 12:22:17.478654 extend-filesystems[1510]: Found vda4 Jan 17 12:22:17.478654 extend-filesystems[1510]: Found vda6 Jan 17 12:22:17.478654 extend-filesystems[1510]: Found vda7 Jan 17 12:22:17.478654 extend-filesystems[1510]: Found vda9 Jan 17 12:22:17.478654 extend-filesystems[1510]: Checking size of /dev/vda9 Jan 17 12:22:17.475654 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 17 12:22:17.481734 dbus-daemon[1507]: [system] SELinux support is enabled Jan 17 12:22:17.489404 extend-filesystems[1510]: Resized partition /dev/vda9 Jan 17 12:22:17.483729 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 17 12:22:17.491659 systemd[1]: Starting update-engine.service - Update Engine... Jan 17 12:22:17.494001 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 17 12:22:17.497630 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 17 12:22:17.498688 extend-filesystems[1533]: resize2fs 1.47.1 (20-May-2024) Jan 17 12:22:17.503564 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1235) Jan 17 12:22:17.503590 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jan 17 12:22:17.508341 jq[1535]: true Jan 17 12:22:17.509772 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 17 12:22:17.510012 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 17 12:22:17.510252 systemd[1]: motdgen.service: Deactivated successfully. Jan 17 12:22:17.510542 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 17 12:22:17.515425 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 17 12:22:17.515766 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Jan 17 12:22:17.523811 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jan 17 12:22:17.536306 jq[1540]: true Jan 17 12:22:17.538749 (ntainerd)[1548]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 17 12:22:17.545262 extend-filesystems[1533]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 17 12:22:17.545262 extend-filesystems[1533]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 17 12:22:17.545262 extend-filesystems[1533]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jan 17 12:22:17.552405 extend-filesystems[1510]: Resized filesystem in /dev/vda9 Jan 17 12:22:17.546892 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 17 12:22:17.553317 update_engine[1534]: I20250117 12:22:17.545063 1534 main.cc:92] Flatcar Update Engine starting Jan 17 12:22:17.547098 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 17 12:22:17.555837 update_engine[1534]: I20250117 12:22:17.555042 1534 update_check_scheduler.cc:74] Next update check in 8m7s Jan 17 12:22:17.563850 tar[1538]: linux-arm64/helm Jan 17 12:22:17.561397 systemd[1]: Started update-engine.service - Update Engine. Jan 17 12:22:17.562709 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 17 12:22:17.562733 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 17 12:22:17.564589 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 17 12:22:17.566398 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 17 12:22:17.568119 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 17 12:22:17.579522 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 17 12:22:17.584933 systemd-logind[1520]: Watching system buttons on /dev/input/event0 (Power Button) Jan 17 12:22:17.586815 systemd-logind[1520]: New seat seat0. Jan 17 12:22:17.587380 systemd[1]: Started systemd-logind.service - User Login Management. Jan 17 12:22:17.591518 bash[1570]: Updated "/home/core/.ssh/authorized_keys" Jan 17 12:22:17.593876 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 17 12:22:17.597293 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 17 12:22:17.624509 locksmithd[1571]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 17 12:22:17.639505 systemd-networkd[1230]: eth0: Gained IPv6LL Jan 17 12:22:17.645923 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 17 12:22:17.650114 systemd[1]: Reached target network-online.target - Network is Online. Jan 17 12:22:17.665602 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 17 12:22:17.672560 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:22:17.675814 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 17 12:22:17.703648 systemd[1]: coreos-metadata.service: Deactivated successfully. 
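extend-filesystems has grown the root ext4 filesystem on /dev/vda9 online from 553472 to 1864699 4 KiB blocks, roughly 2.1 GiB to 7.1 GiB, so the whole root partition becomes usable on first boot, and update_engine has scheduled its first update check in about eight minutes. A hedged sketch of the equivalent manual steps on a Flatcar host (resize2fs only grows the filesystem up to the existing partition size; update_engine_client is assumed to be present):

  lsblk -o NAME,SIZE,FSTYPE,MOUNTPOINT /dev/vda   # partition layout, vda9 is the root filesystem
  resize2fs /dev/vda9                             # online-grow ext4 to fill the partition (no-op if already full size)
  df -h /                                         # confirm the new size of /
  update_engine_client -status                    # current update-engine state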
Jan 17 12:22:17.703956 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 17 12:22:17.706956 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 17 12:22:17.727043 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 17 12:22:17.728736 containerd[1548]: time="2025-01-17T12:22:17.728360520Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 17 12:22:17.753903 containerd[1548]: time="2025-01-17T12:22:17.753710040Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 17 12:22:17.755422 containerd[1548]: time="2025-01-17T12:22:17.755176400Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 17 12:22:17.755422 containerd[1548]: time="2025-01-17T12:22:17.755222760Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 17 12:22:17.755422 containerd[1548]: time="2025-01-17T12:22:17.755243280Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 17 12:22:17.756188 containerd[1548]: time="2025-01-17T12:22:17.756162960Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 17 12:22:17.756258 containerd[1548]: time="2025-01-17T12:22:17.756245120Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 17 12:22:17.756449 containerd[1548]: time="2025-01-17T12:22:17.756406800Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 12:22:17.756517 containerd[1548]: time="2025-01-17T12:22:17.756503920Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 17 12:22:17.756792 containerd[1548]: time="2025-01-17T12:22:17.756768640Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 12:22:17.757552 containerd[1548]: time="2025-01-17T12:22:17.756842920Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 17 12:22:17.757552 containerd[1548]: time="2025-01-17T12:22:17.756862800Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 12:22:17.757552 containerd[1548]: time="2025-01-17T12:22:17.756873040Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 17 12:22:17.757552 containerd[1548]: time="2025-01-17T12:22:17.756949080Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 17 12:22:17.757552 containerd[1548]: time="2025-01-17T12:22:17.757128080Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
type=io.containerd.snapshotter.v1 Jan 17 12:22:17.757552 containerd[1548]: time="2025-01-17T12:22:17.757257200Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 12:22:17.757552 containerd[1548]: time="2025-01-17T12:22:17.757271760Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 17 12:22:17.757552 containerd[1548]: time="2025-01-17T12:22:17.757340360Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 17 12:22:17.757552 containerd[1548]: time="2025-01-17T12:22:17.757427480Z" level=info msg="metadata content store policy set" policy=shared Jan 17 12:22:17.762890 containerd[1548]: time="2025-01-17T12:22:17.762868160Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 17 12:22:17.762994 containerd[1548]: time="2025-01-17T12:22:17.762978520Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 17 12:22:17.763047 containerd[1548]: time="2025-01-17T12:22:17.763036160Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 17 12:22:17.763097 containerd[1548]: time="2025-01-17T12:22:17.763086520Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 17 12:22:17.763151 containerd[1548]: time="2025-01-17T12:22:17.763139120Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 17 12:22:17.763324 containerd[1548]: time="2025-01-17T12:22:17.763305560Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 17 12:22:17.763789 containerd[1548]: time="2025-01-17T12:22:17.763765000Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 17 12:22:17.763995 containerd[1548]: time="2025-01-17T12:22:17.763974240Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 17 12:22:17.764061 containerd[1548]: time="2025-01-17T12:22:17.764049160Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 17 12:22:17.764111 containerd[1548]: time="2025-01-17T12:22:17.764100800Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 17 12:22:17.764163 containerd[1548]: time="2025-01-17T12:22:17.764152080Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 17 12:22:17.764213 containerd[1548]: time="2025-01-17T12:22:17.764202320Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 17 12:22:17.764261 containerd[1548]: time="2025-01-17T12:22:17.764250120Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 17 12:22:17.764326 containerd[1548]: time="2025-01-17T12:22:17.764312400Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." 
type=io.containerd.service.v1 Jan 17 12:22:17.764427 containerd[1548]: time="2025-01-17T12:22:17.764411920Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 17 12:22:17.764482 containerd[1548]: time="2025-01-17T12:22:17.764470680Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 17 12:22:17.764534 containerd[1548]: time="2025-01-17T12:22:17.764522120Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 17 12:22:17.764585 containerd[1548]: time="2025-01-17T12:22:17.764573680Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 17 12:22:17.764660 containerd[1548]: time="2025-01-17T12:22:17.764646440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 17 12:22:17.764722 containerd[1548]: time="2025-01-17T12:22:17.764701640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 17 12:22:17.764834 containerd[1548]: time="2025-01-17T12:22:17.764819240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 17 12:22:17.764896 containerd[1548]: time="2025-01-17T12:22:17.764884080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 17 12:22:17.764947 containerd[1548]: time="2025-01-17T12:22:17.764935920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 17 12:22:17.764998 containerd[1548]: time="2025-01-17T12:22:17.764987400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 17 12:22:17.765063 containerd[1548]: time="2025-01-17T12:22:17.765049200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 17 12:22:17.765114 containerd[1548]: time="2025-01-17T12:22:17.765103360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 17 12:22:17.765409 containerd[1548]: time="2025-01-17T12:22:17.765154040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 17 12:22:17.765409 containerd[1548]: time="2025-01-17T12:22:17.765175400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 17 12:22:17.765409 containerd[1548]: time="2025-01-17T12:22:17.765187880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 17 12:22:17.765409 containerd[1548]: time="2025-01-17T12:22:17.765200640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 17 12:22:17.765409 containerd[1548]: time="2025-01-17T12:22:17.765212920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 17 12:22:17.765409 containerd[1548]: time="2025-01-17T12:22:17.765227680Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 17 12:22:17.765409 containerd[1548]: time="2025-01-17T12:22:17.765249520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." 
type=io.containerd.grpc.v1 Jan 17 12:22:17.765409 containerd[1548]: time="2025-01-17T12:22:17.765261880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 17 12:22:17.765409 containerd[1548]: time="2025-01-17T12:22:17.765273040Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 17 12:22:17.765629 containerd[1548]: time="2025-01-17T12:22:17.765611320Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 17 12:22:17.765818 containerd[1548]: time="2025-01-17T12:22:17.765801760Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 17 12:22:17.765893 containerd[1548]: time="2025-01-17T12:22:17.765878240Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 17 12:22:17.766379 containerd[1548]: time="2025-01-17T12:22:17.765934880Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 17 12:22:17.766379 containerd[1548]: time="2025-01-17T12:22:17.765949120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 17 12:22:17.766379 containerd[1548]: time="2025-01-17T12:22:17.765961960Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 17 12:22:17.766379 containerd[1548]: time="2025-01-17T12:22:17.765972960Z" level=info msg="NRI interface is disabled by configuration." Jan 17 12:22:17.766379 containerd[1548]: time="2025-01-17T12:22:17.765990600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 17 12:22:17.766513 containerd[1548]: time="2025-01-17T12:22:17.766239760Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 17 12:22:17.766513 containerd[1548]: time="2025-01-17T12:22:17.766301400Z" level=info msg="Connect containerd service" Jan 17 12:22:17.766513 containerd[1548]: time="2025-01-17T12:22:17.766329800Z" level=info msg="using legacy CRI server" Jan 17 12:22:17.766513 containerd[1548]: time="2025-01-17T12:22:17.766337000Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 17 12:22:17.766747 containerd[1548]: time="2025-01-17T12:22:17.766726880Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 17 12:22:17.767381 containerd[1548]: time="2025-01-17T12:22:17.767329440Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 17 
12:22:17.767971 containerd[1548]: time="2025-01-17T12:22:17.767949480Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 17 12:22:17.768080 containerd[1548]: time="2025-01-17T12:22:17.768067480Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 17 12:22:17.768452 containerd[1548]: time="2025-01-17T12:22:17.768398120Z" level=info msg="Start subscribing containerd event" Jan 17 12:22:17.768512 containerd[1548]: time="2025-01-17T12:22:17.768472680Z" level=info msg="Start recovering state" Jan 17 12:22:17.768646 containerd[1548]: time="2025-01-17T12:22:17.768542600Z" level=info msg="Start event monitor" Jan 17 12:22:17.768646 containerd[1548]: time="2025-01-17T12:22:17.768558240Z" level=info msg="Start snapshots syncer" Jan 17 12:22:17.768646 containerd[1548]: time="2025-01-17T12:22:17.768572720Z" level=info msg="Start cni network conf syncer for default" Jan 17 12:22:17.768646 containerd[1548]: time="2025-01-17T12:22:17.768580720Z" level=info msg="Start streaming server" Jan 17 12:22:17.769275 containerd[1548]: time="2025-01-17T12:22:17.768728800Z" level=info msg="containerd successfully booted in 0.041080s" Jan 17 12:22:17.768828 systemd[1]: Started containerd.service - containerd container runtime. Jan 17 12:22:17.944112 tar[1538]: linux-arm64/LICENSE Jan 17 12:22:17.944281 tar[1538]: linux-arm64/README.md Jan 17 12:22:17.962146 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 17 12:22:17.984132 sshd_keygen[1529]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 17 12:22:18.002841 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 17 12:22:18.017741 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 17 12:22:18.023556 systemd[1]: issuegen.service: Deactivated successfully. Jan 17 12:22:18.023790 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 17 12:22:18.026526 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 17 12:22:18.039658 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 17 12:22:18.042426 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 17 12:22:18.044643 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jan 17 12:22:18.045988 systemd[1]: Reached target getty.target - Login Prompts. Jan 17 12:22:18.169806 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:22:18.171479 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 17 12:22:18.173219 systemd[1]: Startup finished in 5.168s (kernel) + 2.884s (userspace) = 8.052s. Jan 17 12:22:18.173678 (kubelet)[1644]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 12:22:18.635081 kubelet[1644]: E0117 12:22:18.634952 1644 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 12:22:18.637561 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 12:22:18.637762 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 12:22:23.647504 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. 
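The kubelet failure above comes down to one missing file: /var/lib/kubelet/config.yaml does not exist yet, so the process exits and systemd marks the unit failed. On a kubeadm-style setup that file is normally generated when the node is initialised or joined; this log does not show how the host eventually receives it, so the following minimal KubeletConfiguration is purely an illustrative sketch (the cgroup driver and static pod path match values the kubelet prints later in this log; everything else is an assumption).

```python
# Illustrative only: stage a minimal KubeletConfiguration if none exists.
# The real file on this host is normally produced by the provisioning flow
# (e.g. kubeadm); the contents below are an assumption, not what this node
# actually ends up running with.
from pathlib import Path

MINIMAL_KUBELET_CONFIG = """\
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: cgroupfs              # matches CgroupDriver "cgroupfs" in the container-manager dump later in the log
staticPodPath: /etc/kubernetes/manifests
"""

cfg = Path("/var/lib/kubelet/config.yaml")
if cfg.exists():
    print(f"{cfg} already present ({cfg.stat().st_size} bytes)")
else:
    cfg.parent.mkdir(parents=True, exist_ok=True)
    cfg.write_text(MINIMAL_KUBELET_CONFIG)
    print(f"wrote placeholder {cfg}")
```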
Jan 17 12:22:23.658610 systemd[1]: Started sshd@0-10.0.0.132:22-10.0.0.1:38162.service - OpenSSH per-connection server daemon (10.0.0.1:38162). Jan 17 12:22:23.705187 sshd[1658]: Accepted publickey for core from 10.0.0.1 port 38162 ssh2: RSA SHA256:xsFjL0Ru499iNfhRyIcjP6wTIWZ5oE8f5Pm6hYv+KHo Jan 17 12:22:23.706892 sshd[1658]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:22:23.713777 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 17 12:22:23.728535 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 17 12:22:23.730339 systemd-logind[1520]: New session 1 of user core. Jan 17 12:22:23.736882 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 17 12:22:23.738694 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 17 12:22:23.744594 (systemd)[1664]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 17 12:22:23.828997 systemd[1664]: Queued start job for default target default.target. Jan 17 12:22:23.829315 systemd[1664]: Created slice app.slice - User Application Slice. Jan 17 12:22:23.829338 systemd[1664]: Reached target paths.target - Paths. Jan 17 12:22:23.829369 systemd[1664]: Reached target timers.target - Timers. Jan 17 12:22:23.835477 systemd[1664]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 17 12:22:23.840572 systemd[1664]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 17 12:22:23.840626 systemd[1664]: Reached target sockets.target - Sockets. Jan 17 12:22:23.840637 systemd[1664]: Reached target basic.target - Basic System. Jan 17 12:22:23.840671 systemd[1664]: Reached target default.target - Main User Target. Jan 17 12:22:23.840692 systemd[1664]: Startup finished in 89ms. Jan 17 12:22:23.840998 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 17 12:22:23.842511 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 17 12:22:23.895633 systemd[1]: Started sshd@1-10.0.0.132:22-10.0.0.1:38164.service - OpenSSH per-connection server daemon (10.0.0.1:38164). Jan 17 12:22:23.929172 sshd[1676]: Accepted publickey for core from 10.0.0.1 port 38164 ssh2: RSA SHA256:xsFjL0Ru499iNfhRyIcjP6wTIWZ5oE8f5Pm6hYv+KHo Jan 17 12:22:23.930227 sshd[1676]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:22:23.934163 systemd-logind[1520]: New session 2 of user core. Jan 17 12:22:23.943614 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 17 12:22:23.994168 sshd[1676]: pam_unix(sshd:session): session closed for user core Jan 17 12:22:24.005566 systemd[1]: Started sshd@2-10.0.0.132:22-10.0.0.1:38168.service - OpenSSH per-connection server daemon (10.0.0.1:38168). Jan 17 12:22:24.005896 systemd[1]: sshd@1-10.0.0.132:22-10.0.0.1:38164.service: Deactivated successfully. Jan 17 12:22:24.008192 systemd[1]: session-2.scope: Deactivated successfully. Jan 17 12:22:24.008476 systemd-logind[1520]: Session 2 logged out. Waiting for processes to exit. Jan 17 12:22:24.009589 systemd-logind[1520]: Removed session 2. Jan 17 12:22:24.032856 sshd[1681]: Accepted publickey for core from 10.0.0.1 port 38168 ssh2: RSA SHA256:xsFjL0Ru499iNfhRyIcjP6wTIWZ5oE8f5Pm6hYv+KHo Jan 17 12:22:24.034009 sshd[1681]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:22:24.037239 systemd-logind[1520]: New session 3 of user core. 
Jan 17 12:22:24.049624 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 17 12:22:24.096281 sshd[1681]: pam_unix(sshd:session): session closed for user core Jan 17 12:22:24.110562 systemd[1]: Started sshd@3-10.0.0.132:22-10.0.0.1:38170.service - OpenSSH per-connection server daemon (10.0.0.1:38170). Jan 17 12:22:24.110965 systemd[1]: sshd@2-10.0.0.132:22-10.0.0.1:38168.service: Deactivated successfully. Jan 17 12:22:24.112570 systemd-logind[1520]: Session 3 logged out. Waiting for processes to exit. Jan 17 12:22:24.113093 systemd[1]: session-3.scope: Deactivated successfully. Jan 17 12:22:24.114274 systemd-logind[1520]: Removed session 3. Jan 17 12:22:24.137185 sshd[1689]: Accepted publickey for core from 10.0.0.1 port 38170 ssh2: RSA SHA256:xsFjL0Ru499iNfhRyIcjP6wTIWZ5oE8f5Pm6hYv+KHo Jan 17 12:22:24.138256 sshd[1689]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:22:24.141774 systemd-logind[1520]: New session 4 of user core. Jan 17 12:22:24.147571 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 17 12:22:24.197929 sshd[1689]: pam_unix(sshd:session): session closed for user core Jan 17 12:22:24.207768 systemd[1]: Started sshd@4-10.0.0.132:22-10.0.0.1:38184.service - OpenSSH per-connection server daemon (10.0.0.1:38184). Jan 17 12:22:24.208212 systemd[1]: sshd@3-10.0.0.132:22-10.0.0.1:38170.service: Deactivated successfully. Jan 17 12:22:24.209402 systemd[1]: session-4.scope: Deactivated successfully. Jan 17 12:22:24.209976 systemd-logind[1520]: Session 4 logged out. Waiting for processes to exit. Jan 17 12:22:24.211211 systemd-logind[1520]: Removed session 4. Jan 17 12:22:24.234640 sshd[1697]: Accepted publickey for core from 10.0.0.1 port 38184 ssh2: RSA SHA256:xsFjL0Ru499iNfhRyIcjP6wTIWZ5oE8f5Pm6hYv+KHo Jan 17 12:22:24.235676 sshd[1697]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:22:24.239303 systemd-logind[1520]: New session 5 of user core. Jan 17 12:22:24.245581 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 17 12:22:24.307950 sudo[1704]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 17 12:22:24.308193 sudo[1704]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 12:22:24.319114 sudo[1704]: pam_unix(sudo:session): session closed for user root Jan 17 12:22:24.320783 sshd[1697]: pam_unix(sshd:session): session closed for user core Jan 17 12:22:24.329570 systemd[1]: Started sshd@5-10.0.0.132:22-10.0.0.1:38186.service - OpenSSH per-connection server daemon (10.0.0.1:38186). Jan 17 12:22:24.329929 systemd[1]: sshd@4-10.0.0.132:22-10.0.0.1:38184.service: Deactivated successfully. Jan 17 12:22:24.331488 systemd-logind[1520]: Session 5 logged out. Waiting for processes to exit. Jan 17 12:22:24.332047 systemd[1]: session-5.scope: Deactivated successfully. Jan 17 12:22:24.333639 systemd-logind[1520]: Removed session 5. Jan 17 12:22:24.357133 sshd[1706]: Accepted publickey for core from 10.0.0.1 port 38186 ssh2: RSA SHA256:xsFjL0Ru499iNfhRyIcjP6wTIWZ5oE8f5Pm6hYv+KHo Jan 17 12:22:24.358200 sshd[1706]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:22:24.361968 systemd-logind[1520]: New session 6 of user core. Jan 17 12:22:24.373565 systemd[1]: Started session-6.scope - Session 6 of User core. 
Jan 17 12:22:24.424243 sudo[1714]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 17 12:22:24.424897 sudo[1714]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 12:22:24.427897 sudo[1714]: pam_unix(sudo:session): session closed for user root Jan 17 12:22:24.432179 sudo[1713]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 17 12:22:24.432452 sudo[1713]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 12:22:24.451665 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 17 12:22:24.454022 auditctl[1717]: No rules Jan 17 12:22:24.454849 systemd[1]: audit-rules.service: Deactivated successfully. Jan 17 12:22:24.455088 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 17 12:22:24.456760 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 17 12:22:24.479395 augenrules[1736]: No rules Jan 17 12:22:24.480668 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 17 12:22:24.481812 sudo[1713]: pam_unix(sudo:session): session closed for user root Jan 17 12:22:24.483371 sshd[1706]: pam_unix(sshd:session): session closed for user core Jan 17 12:22:24.489572 systemd[1]: Started sshd@6-10.0.0.132:22-10.0.0.1:38196.service - OpenSSH per-connection server daemon (10.0.0.1:38196). Jan 17 12:22:24.489934 systemd[1]: sshd@5-10.0.0.132:22-10.0.0.1:38186.service: Deactivated successfully. Jan 17 12:22:24.491915 systemd-logind[1520]: Session 6 logged out. Waiting for processes to exit. Jan 17 12:22:24.493057 systemd[1]: session-6.scope: Deactivated successfully. Jan 17 12:22:24.494227 systemd-logind[1520]: Removed session 6. Jan 17 12:22:24.519021 sshd[1742]: Accepted publickey for core from 10.0.0.1 port 38196 ssh2: RSA SHA256:xsFjL0Ru499iNfhRyIcjP6wTIWZ5oE8f5Pm6hYv+KHo Jan 17 12:22:24.520243 sshd[1742]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:22:24.524421 systemd-logind[1520]: New session 7 of user core. Jan 17 12:22:24.530648 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 17 12:22:24.581745 sudo[1749]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 17 12:22:24.582009 sudo[1749]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 12:22:24.890655 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 17 12:22:24.890724 (dockerd)[1767]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 17 12:22:25.145452 dockerd[1767]: time="2025-01-17T12:22:25.145086044Z" level=info msg="Starting up" Jan 17 12:22:25.208864 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport2009169405-merged.mount: Deactivated successfully. Jan 17 12:22:25.373723 dockerd[1767]: time="2025-01-17T12:22:25.373669783Z" level=info msg="Loading containers: start." Jan 17 12:22:25.451323 kernel: Initializing XFRM netlink socket Jan 17 12:22:25.510129 systemd-networkd[1230]: docker0: Link UP Jan 17 12:22:25.529508 dockerd[1767]: time="2025-01-17T12:22:25.529478016Z" level=info msg="Loading containers: done." 
Jan 17 12:22:25.541095 dockerd[1767]: time="2025-01-17T12:22:25.541040841Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 17 12:22:25.541216 dockerd[1767]: time="2025-01-17T12:22:25.541134298Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 17 12:22:25.541270 dockerd[1767]: time="2025-01-17T12:22:25.541250060Z" level=info msg="Daemon has completed initialization" Jan 17 12:22:25.566842 dockerd[1767]: time="2025-01-17T12:22:25.566716976Z" level=info msg="API listen on /run/docker.sock" Jan 17 12:22:25.566940 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 17 12:22:26.206911 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3169434906-merged.mount: Deactivated successfully. Jan 17 12:22:26.218714 containerd[1548]: time="2025-01-17T12:22:26.218666141Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.13\"" Jan 17 12:22:26.895018 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3828271041.mount: Deactivated successfully. Jan 17 12:22:28.073066 containerd[1548]: time="2025-01-17T12:22:28.073020973Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:22:28.074005 containerd[1548]: time="2025-01-17T12:22:28.073566630Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.13: active requests=0, bytes read=32202459" Jan 17 12:22:28.074705 containerd[1548]: time="2025-01-17T12:22:28.074657706Z" level=info msg="ImageCreate event name:\"sha256:5c8d3b261565d9e15723d572fb33e6ec92ceb342312c9418457857eb57d1ae9a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:22:28.078136 containerd[1548]: time="2025-01-17T12:22:28.078090764Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:e5c42861045d0615769fad8a4e32e476fc5e59020157b60ced1bb7a69d4a5ce9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:22:28.078856 containerd[1548]: time="2025-01-17T12:22:28.078820800Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.13\" with image id \"sha256:5c8d3b261565d9e15723d572fb33e6ec92ceb342312c9418457857eb57d1ae9a\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.13\", repo digest \"registry.k8s.io/kube-apiserver@sha256:e5c42861045d0615769fad8a4e32e476fc5e59020157b60ced1bb7a69d4a5ce9\", size \"32199257\" in 1.860097573s" Jan 17 12:22:28.078917 containerd[1548]: time="2025-01-17T12:22:28.078859019Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.13\" returns image reference \"sha256:5c8d3b261565d9e15723d572fb33e6ec92ceb342312c9418457857eb57d1ae9a\"" Jan 17 12:22:28.096800 containerd[1548]: time="2025-01-17T12:22:28.096622218Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.13\"" Jan 17 12:22:28.887979 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 17 12:22:28.897523 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:22:28.985034 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
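With "Daemon has completed initialization" and "API listen on /run/docker.sock" logged above, the Docker engine can be health-checked directly over that Unix socket. A minimal standard-library sketch follows; the /_ping endpoint is part of the Docker Engine API, and permission to open the socket (root or docker group) is assumed.

```python
# Minimal liveness probe against the Docker Engine API over its Unix socket.
# Assumes the daemon from the log above is listening on /run/docker.sock and
# that the caller is allowed to open it.
import socket

def docker_ping(sock_path: str = "/run/docker.sock") -> bool:
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect(sock_path)
        s.sendall(b"GET /_ping HTTP/1.0\r\nHost: docker\r\n\r\n")
        reply = s.recv(4096).decode(errors="replace")
    # A healthy daemon answers with a 200 status and body "OK".
    return reply.startswith("HTTP/1.1 200") or reply.startswith("HTTP/1.0 200")

if __name__ == "__main__":
    print("docker daemon reachable:", docker_ping())
```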
Jan 17 12:22:28.988785 (kubelet)[1999]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 12:22:29.028990 kubelet[1999]: E0117 12:22:29.028932 1999 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 12:22:29.032509 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 12:22:29.032697 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 12:22:29.700863 containerd[1548]: time="2025-01-17T12:22:29.700760185Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:22:29.701774 containerd[1548]: time="2025-01-17T12:22:29.701222784Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.13: active requests=0, bytes read=29381104" Jan 17 12:22:29.702395 containerd[1548]: time="2025-01-17T12:22:29.702347240Z" level=info msg="ImageCreate event name:\"sha256:bcc4e3c2095eb1aad9487d6679a8871f05390959aaf5091f391510033742cf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:22:29.706075 containerd[1548]: time="2025-01-17T12:22:29.706041162Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:fc2838399752740bdd36c7e9287d4406feff6bef2baff393174b34ccd447b780\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:22:29.707067 containerd[1548]: time="2025-01-17T12:22:29.707013777Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.13\" with image id \"sha256:bcc4e3c2095eb1aad9487d6679a8871f05390959aaf5091f391510033742cf7c\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.13\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:fc2838399752740bdd36c7e9287d4406feff6bef2baff393174b34ccd447b780\", size \"30784892\" in 1.610357857s" Jan 17 12:22:29.707461 containerd[1548]: time="2025-01-17T12:22:29.707439276Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.13\" returns image reference \"sha256:bcc4e3c2095eb1aad9487d6679a8871f05390959aaf5091f391510033742cf7c\"" Jan 17 12:22:29.725545 containerd[1548]: time="2025-01-17T12:22:29.725515448Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.13\"" Jan 17 12:22:30.795136 containerd[1548]: time="2025-01-17T12:22:30.795087179Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:22:30.796171 containerd[1548]: time="2025-01-17T12:22:30.796142225Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.13: active requests=0, bytes read=15765674" Jan 17 12:22:30.797020 containerd[1548]: time="2025-01-17T12:22:30.796993731Z" level=info msg="ImageCreate event name:\"sha256:09e2786faf24867b706964cc8c35c296f197dc7a57806a75388efa13868bf50c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:22:30.801307 containerd[1548]: time="2025-01-17T12:22:30.801249779Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:a4f1649a5249c0784963d85644b1e614548f032da9b4fb00a760bac02818ce4f\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"}" Jan 17 12:22:30.803179 containerd[1548]: time="2025-01-17T12:22:30.803033510Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.13\" with image id \"sha256:09e2786faf24867b706964cc8c35c296f197dc7a57806a75388efa13868bf50c\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.13\", repo digest \"registry.k8s.io/kube-scheduler@sha256:a4f1649a5249c0784963d85644b1e614548f032da9b4fb00a760bac02818ce4f\", size \"17169480\" in 1.077480641s" Jan 17 12:22:30.803179 containerd[1548]: time="2025-01-17T12:22:30.803073371Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.13\" returns image reference \"sha256:09e2786faf24867b706964cc8c35c296f197dc7a57806a75388efa13868bf50c\"" Jan 17 12:22:30.821126 containerd[1548]: time="2025-01-17T12:22:30.821097316Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.13\"" Jan 17 12:22:31.829485 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount592329162.mount: Deactivated successfully. Jan 17 12:22:32.136987 containerd[1548]: time="2025-01-17T12:22:32.136866924Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:22:32.137787 containerd[1548]: time="2025-01-17T12:22:32.137677697Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.13: active requests=0, bytes read=25274684" Jan 17 12:22:32.138446 containerd[1548]: time="2025-01-17T12:22:32.138419459Z" level=info msg="ImageCreate event name:\"sha256:e3bc26919d7c787204f912c4bc2584bac5686761ae4da96585475c68dcc57181\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:22:32.140823 containerd[1548]: time="2025-01-17T12:22:32.140789325Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:dd45846de733434501e436638a7a240f2d379bf0a6bb0404a7684e0cf52c4011\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:22:32.141524 containerd[1548]: time="2025-01-17T12:22:32.141492504Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.13\" with image id \"sha256:e3bc26919d7c787204f912c4bc2584bac5686761ae4da96585475c68dcc57181\", repo tag \"registry.k8s.io/kube-proxy:v1.29.13\", repo digest \"registry.k8s.io/kube-proxy@sha256:dd45846de733434501e436638a7a240f2d379bf0a6bb0404a7684e0cf52c4011\", size \"25273701\" in 1.320261094s" Jan 17 12:22:32.141524 containerd[1548]: time="2025-01-17T12:22:32.141526010Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.13\" returns image reference \"sha256:e3bc26919d7c787204f912c4bc2584bac5686761ae4da96585475c68dcc57181\"" Jan 17 12:22:32.159716 containerd[1548]: time="2025-01-17T12:22:32.159630703Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 17 12:22:32.751946 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1001872502.mount: Deactivated successfully. 
Jan 17 12:22:33.315893 containerd[1548]: time="2025-01-17T12:22:33.315842393Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:22:33.316255 containerd[1548]: time="2025-01-17T12:22:33.316164504Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485383" Jan 17 12:22:33.317002 containerd[1548]: time="2025-01-17T12:22:33.316965463Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:22:33.319986 containerd[1548]: time="2025-01-17T12:22:33.319948586Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:22:33.321204 containerd[1548]: time="2025-01-17T12:22:33.321170336Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.161508805s" Jan 17 12:22:33.321237 containerd[1548]: time="2025-01-17T12:22:33.321206122Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Jan 17 12:22:33.340307 containerd[1548]: time="2025-01-17T12:22:33.340258799Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jan 17 12:22:33.764519 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1796053808.mount: Deactivated successfully. 
Jan 17 12:22:33.769134 containerd[1548]: time="2025-01-17T12:22:33.769082625Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:22:33.769781 containerd[1548]: time="2025-01-17T12:22:33.769737842Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268823" Jan 17 12:22:33.770428 containerd[1548]: time="2025-01-17T12:22:33.770346758Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:22:33.772388 containerd[1548]: time="2025-01-17T12:22:33.772340678Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:22:33.773468 containerd[1548]: time="2025-01-17T12:22:33.773432680Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 433.114025ms" Jan 17 12:22:33.773468 containerd[1548]: time="2025-01-17T12:22:33.773465067Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Jan 17 12:22:33.791899 containerd[1548]: time="2025-01-17T12:22:33.791873643Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Jan 17 12:22:34.321625 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3181985772.mount: Deactivated successfully. Jan 17 12:22:36.007146 containerd[1548]: time="2025-01-17T12:22:36.007087999Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:22:36.008295 containerd[1548]: time="2025-01-17T12:22:36.008260612Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=65200788" Jan 17 12:22:36.009231 containerd[1548]: time="2025-01-17T12:22:36.009173510Z" level=info msg="ImageCreate event name:\"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:22:36.014372 containerd[1548]: time="2025-01-17T12:22:36.012160443Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:22:36.014372 containerd[1548]: time="2025-01-17T12:22:36.014232998Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"65198393\" in 2.222327847s" Jan 17 12:22:36.014372 containerd[1548]: time="2025-01-17T12:22:36.014261708Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\"" Jan 17 12:22:39.282991 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
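kubelet.service has now failed twice and been rescheduled twice, and the journal timestamps show the cadence: each "Scheduled restart job" line lands about 10.25 s after the preceding failure, consistent with a unit-level restart delay of roughly ten seconds. The kubelet.service unit file itself is not part of this log, so the exact Restart=/RestartSec= settings are an assumption; the arithmetic below only uses timestamps printed above.

```python
# Restart cadence of kubelet.service, computed from the journal timestamps
# quoted earlier in this log (failure -> next "Scheduled restart job").
from datetime import datetime

FMT = "%H:%M:%S.%f"
events = [
    ("failure #1", "12:22:18.637762", "restart #1", "12:22:28.887979"),
    ("failure #2", "12:22:29.032697", "restart #2", "12:22:39.282991"),
]
for fail_label, fail_ts, restart_label, restart_ts in events:
    delta = datetime.strptime(restart_ts, FMT) - datetime.strptime(fail_ts, FMT)
    print(f"{fail_label} -> {restart_label}: {delta.total_seconds():.3f}s")
# Both gaps come out to ~10.250 s.
```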
Jan 17 12:22:39.290502 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:22:39.491809 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:22:39.495475 (kubelet)[2235]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 12:22:39.537282 kubelet[2235]: E0117 12:22:39.537149 2235 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 12:22:39.540346 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 12:22:39.540551 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 12:22:40.407053 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:22:40.416618 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:22:40.431584 systemd[1]: Reloading requested from client PID 2252 ('systemctl') (unit session-7.scope)... Jan 17 12:22:40.431603 systemd[1]: Reloading... Jan 17 12:22:40.496388 zram_generator::config[2295]: No configuration found. Jan 17 12:22:40.659340 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 12:22:40.707782 systemd[1]: Reloading finished in 275 ms. Jan 17 12:22:40.741256 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 17 12:22:40.741335 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 17 12:22:40.741608 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:22:40.743798 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:22:40.831995 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:22:40.836253 (kubelet)[2349]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 17 12:22:40.874886 kubelet[2349]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 12:22:40.874886 kubelet[2349]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 17 12:22:40.874886 kubelet[2349]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 17 12:22:40.875243 kubelet[2349]: I0117 12:22:40.874926 2349 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 17 12:22:41.955482 kubelet[2349]: I0117 12:22:41.955440 2349 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Jan 17 12:22:41.955482 kubelet[2349]: I0117 12:22:41.955471 2349 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 17 12:22:41.955864 kubelet[2349]: I0117 12:22:41.955669 2349 server.go:919] "Client rotation is on, will bootstrap in background" Jan 17 12:22:41.989016 kubelet[2349]: E0117 12:22:41.988976 2349 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.132:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.132:6443: connect: connection refused Jan 17 12:22:41.989187 kubelet[2349]: I0117 12:22:41.988973 2349 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 17 12:22:41.999975 kubelet[2349]: I0117 12:22:41.999949 2349 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 17 12:22:42.001121 kubelet[2349]: I0117 12:22:42.001096 2349 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 17 12:22:42.001293 kubelet[2349]: I0117 12:22:42.001278 2349 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 17 12:22:42.001382 kubelet[2349]: I0117 12:22:42.001302 2349 topology_manager.go:138] "Creating topology manager with none policy" Jan 17 12:22:42.001382 kubelet[2349]: I0117 12:22:42.001311 2349 container_manager_linux.go:301] "Creating device plugin manager" Jan 17 12:22:42.002388 kubelet[2349]: I0117 12:22:42.002342 2349 state_mem.go:36] "Initialized new in-memory state store" Jan 17 12:22:42.004400 kubelet[2349]: I0117 12:22:42.004372 2349 kubelet.go:396] "Attempting to sync node with API server" Jan 17 12:22:42.004400 kubelet[2349]: 
I0117 12:22:42.004398 2349 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 17 12:22:42.004483 kubelet[2349]: I0117 12:22:42.004420 2349 kubelet.go:312] "Adding apiserver pod source" Jan 17 12:22:42.004483 kubelet[2349]: I0117 12:22:42.004434 2349 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 17 12:22:42.005373 kubelet[2349]: W0117 12:22:42.005282 2349 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.132:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.132:6443: connect: connection refused Jan 17 12:22:42.005373 kubelet[2349]: E0117 12:22:42.005335 2349 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.132:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.132:6443: connect: connection refused Jan 17 12:22:42.006181 kubelet[2349]: W0117 12:22:42.006106 2349 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.132:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.132:6443: connect: connection refused Jan 17 12:22:42.006181 kubelet[2349]: E0117 12:22:42.006163 2349 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.132:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.132:6443: connect: connection refused Jan 17 12:22:42.009720 kubelet[2349]: I0117 12:22:42.009699 2349 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 17 12:22:42.010196 kubelet[2349]: I0117 12:22:42.010183 2349 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 17 12:22:42.010716 kubelet[2349]: W0117 12:22:42.010688 2349 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
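Every reflector and certificate-signing request above fails the same way: dial tcp 10.0.0.132:6443: connect: connection refused. That is expected at this point, because nothing is listening on the API server port yet; the kubelet is about to create kube-apiserver itself from the static pod manifests it just registered. A direct probe from the node (standard library only) would show the same refusal until the apiserver pod comes up:

```python
# Reproduce the kubelet's "connection refused" symptom with a plain TCP probe
# against the API server address seen in the log above.
import socket

def probe(host: str = "10.0.0.132", port: int = 6443, timeout: float = 2.0) -> str:
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return "open"
    except ConnectionRefusedError:
        return "connection refused"   # expected until the kube-apiserver static pod is running
    except OSError as exc:
        return f"unreachable: {exc}"

print(f"10.0.0.132:6443 -> {probe()}")
```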
Jan 17 12:22:42.011480 kubelet[2349]: I0117 12:22:42.011463 2349 server.go:1256] "Started kubelet" Jan 17 12:22:42.012570 kubelet[2349]: I0117 12:22:42.011583 2349 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 17 12:22:42.012570 kubelet[2349]: I0117 12:22:42.011889 2349 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 17 12:22:42.012570 kubelet[2349]: I0117 12:22:42.011949 2349 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jan 17 12:22:42.013002 kubelet[2349]: I0117 12:22:42.012974 2349 server.go:461] "Adding debug handlers to kubelet server" Jan 17 12:22:42.013703 kubelet[2349]: I0117 12:22:42.013621 2349 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 17 12:22:42.015492 kubelet[2349]: I0117 12:22:42.015475 2349 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 17 12:22:42.015664 kubelet[2349]: I0117 12:22:42.015650 2349 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jan 17 12:22:42.015768 kubelet[2349]: E0117 12:22:42.015735 2349 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.132:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.132:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.181b7a4f3c23c012 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-01-17 12:22:42.011439122 +0000 UTC m=+1.171635375,LastTimestamp:2025-01-17 12:22:42.011439122 +0000 UTC m=+1.171635375,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 17 12:22:42.015768 kubelet[2349]: E0117 12:22:42.015753 2349 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.132:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.132:6443: connect: connection refused" interval="200ms" Jan 17 12:22:42.015768 kubelet[2349]: I0117 12:22:42.015747 2349 reconciler_new.go:29] "Reconciler: start to sync state" Jan 17 12:22:42.016325 kubelet[2349]: W0117 12:22:42.016278 2349 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.132:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.132:6443: connect: connection refused Jan 17 12:22:42.016325 kubelet[2349]: E0117 12:22:42.016327 2349 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.132:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.132:6443: connect: connection refused Jan 17 12:22:42.016436 kubelet[2349]: I0117 12:22:42.016379 2349 factory.go:221] Registration of the systemd container factory successfully Jan 17 12:22:42.016462 kubelet[2349]: I0117 12:22:42.016453 2349 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 17 12:22:42.016902 kubelet[2349]: E0117 12:22:42.016886 2349 kubelet.go:1462] "Image garbage 
collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 17 12:22:42.018396 kubelet[2349]: I0117 12:22:42.017282 2349 factory.go:221] Registration of the containerd container factory successfully Jan 17 12:22:42.026844 kubelet[2349]: I0117 12:22:42.026744 2349 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 17 12:22:42.027954 kubelet[2349]: I0117 12:22:42.027641 2349 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 17 12:22:42.027954 kubelet[2349]: I0117 12:22:42.027660 2349 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 17 12:22:42.027954 kubelet[2349]: I0117 12:22:42.027675 2349 kubelet.go:2329] "Starting kubelet main sync loop" Jan 17 12:22:42.027954 kubelet[2349]: E0117 12:22:42.027720 2349 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 17 12:22:42.033519 kubelet[2349]: W0117 12:22:42.033481 2349 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.132:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.132:6443: connect: connection refused Jan 17 12:22:42.033519 kubelet[2349]: E0117 12:22:42.033522 2349 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.132:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.132:6443: connect: connection refused Jan 17 12:22:42.034435 kubelet[2349]: I0117 12:22:42.034399 2349 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 17 12:22:42.034435 kubelet[2349]: I0117 12:22:42.034426 2349 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 17 12:22:42.034519 kubelet[2349]: I0117 12:22:42.034457 2349 state_mem.go:36] "Initialized new in-memory state store" Jan 17 12:22:42.036304 kubelet[2349]: I0117 12:22:42.036277 2349 policy_none.go:49] "None policy: Start" Jan 17 12:22:42.036902 kubelet[2349]: I0117 12:22:42.036878 2349 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 17 12:22:42.036965 kubelet[2349]: I0117 12:22:42.036952 2349 state_mem.go:35] "Initializing new in-memory state store" Jan 17 12:22:42.041149 kubelet[2349]: I0117 12:22:42.041045 2349 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 17 12:22:42.041808 kubelet[2349]: I0117 12:22:42.041779 2349 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 17 12:22:42.043663 kubelet[2349]: E0117 12:22:42.043644 2349 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 17 12:22:42.116875 kubelet[2349]: I0117 12:22:42.116848 2349 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 17 12:22:42.117319 kubelet[2349]: E0117 12:22:42.117291 2349 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.132:6443/api/v1/nodes\": dial tcp 10.0.0.132:6443: connect: connection refused" node="localhost" Jan 17 12:22:42.128492 kubelet[2349]: I0117 12:22:42.128454 2349 topology_manager.go:215] "Topology Admit Handler" podUID="403191808deda6abddccc06e53df2d12" podNamespace="kube-system" podName="kube-apiserver-localhost" Jan 17 12:22:42.129376 
kubelet[2349]: I0117 12:22:42.129343 2349 topology_manager.go:215] "Topology Admit Handler" podUID="dd466de870bdf0e573d7965dbd759acf" podNamespace="kube-system" podName="kube-controller-manager-localhost" Jan 17 12:22:42.132106 kubelet[2349]: I0117 12:22:42.132033 2349 topology_manager.go:215] "Topology Admit Handler" podUID="605dd245551545e29d4e79fb03fd341e" podNamespace="kube-system" podName="kube-scheduler-localhost" Jan 17 12:22:42.217107 kubelet[2349]: E0117 12:22:42.217010 2349 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.132:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.132:6443: connect: connection refused" interval="400ms" Jan 17 12:22:42.317536 kubelet[2349]: I0117 12:22:42.317343 2349 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/dd466de870bdf0e573d7965dbd759acf-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"dd466de870bdf0e573d7965dbd759acf\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 12:22:42.317536 kubelet[2349]: I0117 12:22:42.317394 2349 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/dd466de870bdf0e573d7965dbd759acf-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"dd466de870bdf0e573d7965dbd759acf\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 12:22:42.317536 kubelet[2349]: I0117 12:22:42.317414 2349 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/dd466de870bdf0e573d7965dbd759acf-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"dd466de870bdf0e573d7965dbd759acf\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 12:22:42.317536 kubelet[2349]: I0117 12:22:42.317438 2349 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/403191808deda6abddccc06e53df2d12-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"403191808deda6abddccc06e53df2d12\") " pod="kube-system/kube-apiserver-localhost" Jan 17 12:22:42.317536 kubelet[2349]: I0117 12:22:42.317458 2349 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/403191808deda6abddccc06e53df2d12-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"403191808deda6abddccc06e53df2d12\") " pod="kube-system/kube-apiserver-localhost" Jan 17 12:22:42.317704 kubelet[2349]: I0117 12:22:42.317476 2349 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/dd466de870bdf0e573d7965dbd759acf-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"dd466de870bdf0e573d7965dbd759acf\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 12:22:42.317704 kubelet[2349]: I0117 12:22:42.317499 2349 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/dd466de870bdf0e573d7965dbd759acf-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"dd466de870bdf0e573d7965dbd759acf\") " 
pod="kube-system/kube-controller-manager-localhost" Jan 17 12:22:42.317704 kubelet[2349]: I0117 12:22:42.317573 2349 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/605dd245551545e29d4e79fb03fd341e-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"605dd245551545e29d4e79fb03fd341e\") " pod="kube-system/kube-scheduler-localhost" Jan 17 12:22:42.317704 kubelet[2349]: I0117 12:22:42.317642 2349 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/403191808deda6abddccc06e53df2d12-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"403191808deda6abddccc06e53df2d12\") " pod="kube-system/kube-apiserver-localhost" Jan 17 12:22:42.318330 kubelet[2349]: I0117 12:22:42.318307 2349 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 17 12:22:42.318626 kubelet[2349]: E0117 12:22:42.318609 2349 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.132:6443/api/v1/nodes\": dial tcp 10.0.0.132:6443: connect: connection refused" node="localhost" Jan 17 12:22:42.434134 kubelet[2349]: E0117 12:22:42.434105 2349 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:22:42.434679 containerd[1548]: time="2025-01-17T12:22:42.434641931Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:403191808deda6abddccc06e53df2d12,Namespace:kube-system,Attempt:0,}" Jan 17 12:22:42.436832 kubelet[2349]: E0117 12:22:42.436811 2349 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:22:42.436907 kubelet[2349]: E0117 12:22:42.436881 2349 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:22:42.437156 containerd[1548]: time="2025-01-17T12:22:42.437123934Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:605dd245551545e29d4e79fb03fd341e,Namespace:kube-system,Attempt:0,}" Jan 17 12:22:42.437443 containerd[1548]: time="2025-01-17T12:22:42.437159126Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:dd466de870bdf0e573d7965dbd759acf,Namespace:kube-system,Attempt:0,}" Jan 17 12:22:42.617943 kubelet[2349]: E0117 12:22:42.617859 2349 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.132:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.132:6443: connect: connection refused" interval="800ms" Jan 17 12:22:42.720296 kubelet[2349]: I0117 12:22:42.720268 2349 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 17 12:22:42.720609 kubelet[2349]: E0117 12:22:42.720593 2349 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.132:6443/api/v1/nodes\": dial tcp 10.0.0.132:6443: connect: connection refused" node="localhost" Jan 17 12:22:42.934481 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3047338745.mount: Deactivated successfully. 
Jan 17 12:22:42.938066 containerd[1548]: time="2025-01-17T12:22:42.938018672Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 12:22:42.939257 containerd[1548]: time="2025-01-17T12:22:42.939217403Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 17 12:22:42.939837 containerd[1548]: time="2025-01-17T12:22:42.939804112Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 12:22:42.940657 containerd[1548]: time="2025-01-17T12:22:42.940633525Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 12:22:42.942716 containerd[1548]: time="2025-01-17T12:22:42.941346605Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 12:22:42.943146 containerd[1548]: time="2025-01-17T12:22:42.943109850Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Jan 17 12:22:42.944276 containerd[1548]: time="2025-01-17T12:22:42.944243556Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 17 12:22:42.946188 containerd[1548]: time="2025-01-17T12:22:42.946158286Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 12:22:42.947867 containerd[1548]: time="2025-01-17T12:22:42.947840788Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 513.112317ms" Jan 17 12:22:42.949131 containerd[1548]: time="2025-01-17T12:22:42.949099786Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 511.759981ms" Jan 17 12:22:42.951679 containerd[1548]: time="2025-01-17T12:22:42.951644775Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 514.358958ms" Jan 17 12:22:43.014059 kubelet[2349]: W0117 12:22:43.013997 2349 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.132:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.132:6443: connect: connection refused Jan 17 12:22:43.014059 
kubelet[2349]: E0117 12:22:43.014058 2349 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.132:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.132:6443: connect: connection refused Jan 17 12:22:43.070715 kubelet[2349]: W0117 12:22:43.067953 2349 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.132:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.132:6443: connect: connection refused Jan 17 12:22:43.070715 kubelet[2349]: E0117 12:22:43.068008 2349 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.132:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.132:6443: connect: connection refused Jan 17 12:22:43.095301 containerd[1548]: time="2025-01-17T12:22:43.095225507Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:22:43.095942 containerd[1548]: time="2025-01-17T12:22:43.095905764Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:22:43.096085 containerd[1548]: time="2025-01-17T12:22:43.096061851Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:22:43.096213 containerd[1548]: time="2025-01-17T12:22:43.096151193Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:22:43.096366 containerd[1548]: time="2025-01-17T12:22:43.096201142Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:22:43.096366 containerd[1548]: time="2025-01-17T12:22:43.096216459Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:22:43.096487 containerd[1548]: time="2025-01-17T12:22:43.096277966Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:22:43.096487 containerd[1548]: time="2025-01-17T12:22:43.096388903Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:22:43.096487 containerd[1548]: time="2025-01-17T12:22:43.096403819Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:22:43.096946 containerd[1548]: time="2025-01-17T12:22:43.096895556Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:22:43.097330 containerd[1548]: time="2025-01-17T12:22:43.097262159Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:22:43.097474 containerd[1548]: time="2025-01-17T12:22:43.097106672Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:22:43.142475 containerd[1548]: time="2025-01-17T12:22:43.142329320Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:605dd245551545e29d4e79fb03fd341e,Namespace:kube-system,Attempt:0,} returns sandbox id \"445e95d2735e7d2ab8a20f119ea642d46c68b7d160c9f56cb3c2eadbd410281a\"" Jan 17 12:22:43.144319 containerd[1548]: time="2025-01-17T12:22:43.144284108Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:dd466de870bdf0e573d7965dbd759acf,Namespace:kube-system,Attempt:0,} returns sandbox id \"9855c2b2b30465ca378106c9b6a19422f66bd79ab3e7a0df76097cb83c1f3dd0\"" Jan 17 12:22:43.144920 kubelet[2349]: E0117 12:22:43.144876 2349 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:22:43.145489 kubelet[2349]: E0117 12:22:43.145464 2349 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:22:43.148053 containerd[1548]: time="2025-01-17T12:22:43.147885031Z" level=info msg="CreateContainer within sandbox \"445e95d2735e7d2ab8a20f119ea642d46c68b7d160c9f56cb3c2eadbd410281a\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 17 12:22:43.148053 containerd[1548]: time="2025-01-17T12:22:43.147950017Z" level=info msg="CreateContainer within sandbox \"9855c2b2b30465ca378106c9b6a19422f66bd79ab3e7a0df76097cb83c1f3dd0\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 17 12:22:43.153659 containerd[1548]: time="2025-01-17T12:22:43.153550959Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:403191808deda6abddccc06e53df2d12,Namespace:kube-system,Attempt:0,} returns sandbox id \"412967c3b8c96202b8d58b28c1e83378a720b8e24b81659b03766254418f2485\"" Jan 17 12:22:43.154233 kubelet[2349]: E0117 12:22:43.154179 2349 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:22:43.156475 containerd[1548]: time="2025-01-17T12:22:43.156345851Z" level=info msg="CreateContainer within sandbox \"412967c3b8c96202b8d58b28c1e83378a720b8e24b81659b03766254418f2485\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 17 12:22:43.162157 containerd[1548]: time="2025-01-17T12:22:43.162123156Z" level=info msg="CreateContainer within sandbox \"9855c2b2b30465ca378106c9b6a19422f66bd79ab3e7a0df76097cb83c1f3dd0\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"f444ab6e29fc7cc3f89010932ebbc111dc38d86b7e58dcc03ab709f6c391ccf7\"" Jan 17 12:22:43.162936 containerd[1548]: time="2025-01-17T12:22:43.162910471Z" level=info msg="StartContainer for \"f444ab6e29fc7cc3f89010932ebbc111dc38d86b7e58dcc03ab709f6c391ccf7\"" Jan 17 12:22:43.163979 containerd[1548]: time="2025-01-17T12:22:43.163946093Z" level=info msg="CreateContainer within sandbox \"445e95d2735e7d2ab8a20f119ea642d46c68b7d160c9f56cb3c2eadbd410281a\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"cce287e27d466aaec85dc0363d7774784703a824f435ad60cefd9f7dd425f69c\"" Jan 17 12:22:43.164756 containerd[1548]: time="2025-01-17T12:22:43.164510934Z" level=info msg="StartContainer for 
\"cce287e27d466aaec85dc0363d7774784703a824f435ad60cefd9f7dd425f69c\"" Jan 17 12:22:43.172662 containerd[1548]: time="2025-01-17T12:22:43.172624827Z" level=info msg="CreateContainer within sandbox \"412967c3b8c96202b8d58b28c1e83378a720b8e24b81659b03766254418f2485\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"aa5487da6b6e07bc83160f4dcf6f46cd0f2ca13d1b0720ae30089f52e6b368da\"" Jan 17 12:22:43.173203 containerd[1548]: time="2025-01-17T12:22:43.173173152Z" level=info msg="StartContainer for \"aa5487da6b6e07bc83160f4dcf6f46cd0f2ca13d1b0720ae30089f52e6b368da\"" Jan 17 12:22:43.219609 containerd[1548]: time="2025-01-17T12:22:43.219080976Z" level=info msg="StartContainer for \"cce287e27d466aaec85dc0363d7774784703a824f435ad60cefd9f7dd425f69c\" returns successfully" Jan 17 12:22:43.226738 containerd[1548]: time="2025-01-17T12:22:43.226649064Z" level=info msg="StartContainer for \"f444ab6e29fc7cc3f89010932ebbc111dc38d86b7e58dcc03ab709f6c391ccf7\" returns successfully" Jan 17 12:22:43.244888 containerd[1548]: time="2025-01-17T12:22:43.244853155Z" level=info msg="StartContainer for \"aa5487da6b6e07bc83160f4dcf6f46cd0f2ca13d1b0720ae30089f52e6b368da\" returns successfully" Jan 17 12:22:43.247795 kubelet[2349]: W0117 12:22:43.247701 2349 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.132:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.132:6443: connect: connection refused Jan 17 12:22:43.247795 kubelet[2349]: E0117 12:22:43.247770 2349 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.132:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.132:6443: connect: connection refused Jan 17 12:22:43.362873 kubelet[2349]: W0117 12:22:43.362810 2349 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.132:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.132:6443: connect: connection refused Jan 17 12:22:43.362873 kubelet[2349]: E0117 12:22:43.362882 2349 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.132:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.132:6443: connect: connection refused Jan 17 12:22:43.418985 kubelet[2349]: E0117 12:22:43.418929 2349 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.132:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.132:6443: connect: connection refused" interval="1.6s" Jan 17 12:22:43.522177 kubelet[2349]: I0117 12:22:43.522071 2349 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 17 12:22:44.041973 kubelet[2349]: E0117 12:22:44.041367 2349 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:22:44.045981 kubelet[2349]: E0117 12:22:44.045952 2349 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:22:44.047870 kubelet[2349]: E0117 12:22:44.047798 2349 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:22:45.050513 kubelet[2349]: E0117 12:22:45.049841 2349 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:22:45.151373 kubelet[2349]: E0117 12:22:45.151307 2349 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jan 17 12:22:45.232771 kubelet[2349]: I0117 12:22:45.232722 2349 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Jan 17 12:22:45.243609 kubelet[2349]: E0117 12:22:45.243567 2349 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 17 12:22:45.344881 kubelet[2349]: E0117 12:22:45.344526 2349 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 17 12:22:45.445019 kubelet[2349]: E0117 12:22:45.444988 2349 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 17 12:22:46.007460 kubelet[2349]: I0117 12:22:46.007408 2349 apiserver.go:52] "Watching apiserver" Jan 17 12:22:46.016622 kubelet[2349]: I0117 12:22:46.016582 2349 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jan 17 12:22:46.438499 kubelet[2349]: E0117 12:22:46.438401 2349 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:22:47.050987 kubelet[2349]: E0117 12:22:47.050942 2349 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:22:48.025118 systemd[1]: Reloading requested from client PID 2626 ('systemctl') (unit session-7.scope)... Jan 17 12:22:48.025134 systemd[1]: Reloading... Jan 17 12:22:48.084385 zram_generator::config[2670]: No configuration found. Jan 17 12:22:48.165383 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 12:22:48.220083 systemd[1]: Reloading finished in 194 ms. Jan 17 12:22:48.252116 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:22:48.269201 systemd[1]: kubelet.service: Deactivated successfully. Jan 17 12:22:48.269543 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:22:48.284826 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:22:48.378082 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:22:48.382650 (kubelet)[2717]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 17 12:22:48.427861 kubelet[2717]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 12:22:48.427861 kubelet[2717]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. 
Image garbage collector will get sandbox image information from CRI. Jan 17 12:22:48.427861 kubelet[2717]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 12:22:48.428503 kubelet[2717]: I0117 12:22:48.427901 2717 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 17 12:22:48.432963 kubelet[2717]: I0117 12:22:48.432568 2717 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Jan 17 12:22:48.432963 kubelet[2717]: I0117 12:22:48.432588 2717 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 17 12:22:48.434216 kubelet[2717]: I0117 12:22:48.433520 2717 server.go:919] "Client rotation is on, will bootstrap in background" Jan 17 12:22:48.435134 kubelet[2717]: I0117 12:22:48.435103 2717 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 17 12:22:48.436959 kubelet[2717]: I0117 12:22:48.436927 2717 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 17 12:22:48.444083 kubelet[2717]: I0117 12:22:48.444058 2717 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 17 12:22:48.444523 kubelet[2717]: I0117 12:22:48.444499 2717 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 17 12:22:48.444721 kubelet[2717]: I0117 12:22:48.444654 2717 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 17 12:22:48.444815 kubelet[2717]: I0117 12:22:48.444722 2717 topology_manager.go:138] "Creating topology manager with none policy" Jan 17 12:22:48.444815 kubelet[2717]: I0117 12:22:48.444733 2717 container_manager_linux.go:301] "Creating device plugin manager" Jan 17 12:22:48.444815 kubelet[2717]: I0117 12:22:48.444773 2717 state_mem.go:36] "Initialized new in-memory state store" Jan 17 12:22:48.444892 kubelet[2717]: 
I0117 12:22:48.444866 2717 kubelet.go:396] "Attempting to sync node with API server" Jan 17 12:22:48.444892 kubelet[2717]: I0117 12:22:48.444881 2717 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 17 12:22:48.444930 kubelet[2717]: I0117 12:22:48.444901 2717 kubelet.go:312] "Adding apiserver pod source" Jan 17 12:22:48.444930 kubelet[2717]: I0117 12:22:48.444911 2717 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 17 12:22:48.446419 kubelet[2717]: I0117 12:22:48.445780 2717 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 17 12:22:48.446547 kubelet[2717]: I0117 12:22:48.446488 2717 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 17 12:22:48.446872 kubelet[2717]: I0117 12:22:48.446856 2717 server.go:1256] "Started kubelet" Jan 17 12:22:48.447444 kubelet[2717]: I0117 12:22:48.447419 2717 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 17 12:22:48.447690 kubelet[2717]: I0117 12:22:48.447605 2717 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 17 12:22:48.447690 kubelet[2717]: I0117 12:22:48.447656 2717 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jan 17 12:22:48.448186 kubelet[2717]: I0117 12:22:48.448165 2717 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 17 12:22:48.448999 kubelet[2717]: I0117 12:22:48.448315 2717 server.go:461] "Adding debug handlers to kubelet server" Jan 17 12:22:48.462102 kubelet[2717]: I0117 12:22:48.461440 2717 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 17 12:22:48.463078 kubelet[2717]: I0117 12:22:48.463053 2717 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jan 17 12:22:48.463287 kubelet[2717]: I0117 12:22:48.463268 2717 reconciler_new.go:29] "Reconciler: start to sync state" Jan 17 12:22:48.466328 kubelet[2717]: I0117 12:22:48.466305 2717 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 17 12:22:48.469151 kubelet[2717]: I0117 12:22:48.469124 2717 factory.go:221] Registration of the systemd container factory successfully Jan 17 12:22:48.470550 kubelet[2717]: I0117 12:22:48.470507 2717 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 17 12:22:48.471138 kubelet[2717]: E0117 12:22:48.471120 2717 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 17 12:22:48.471871 kubelet[2717]: I0117 12:22:48.469332 2717 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 17 12:22:48.471969 kubelet[2717]: I0117 12:22:48.471958 2717 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 17 12:22:48.472027 kubelet[2717]: I0117 12:22:48.472020 2717 kubelet.go:2329] "Starting kubelet main sync loop" Jan 17 12:22:48.472121 kubelet[2717]: E0117 12:22:48.472111 2717 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 17 12:22:48.474870 kubelet[2717]: I0117 12:22:48.474781 2717 factory.go:221] Registration of the containerd container factory successfully Jan 17 12:22:48.513916 kubelet[2717]: I0117 12:22:48.513886 2717 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 17 12:22:48.513916 kubelet[2717]: I0117 12:22:48.513909 2717 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 17 12:22:48.513916 kubelet[2717]: I0117 12:22:48.513925 2717 state_mem.go:36] "Initialized new in-memory state store" Jan 17 12:22:48.514057 kubelet[2717]: I0117 12:22:48.514050 2717 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 17 12:22:48.514086 kubelet[2717]: I0117 12:22:48.514068 2717 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 17 12:22:48.514086 kubelet[2717]: I0117 12:22:48.514075 2717 policy_none.go:49] "None policy: Start" Jan 17 12:22:48.514711 kubelet[2717]: I0117 12:22:48.514692 2717 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 17 12:22:48.514711 kubelet[2717]: I0117 12:22:48.514711 2717 state_mem.go:35] "Initializing new in-memory state store" Jan 17 12:22:48.514845 kubelet[2717]: I0117 12:22:48.514827 2717 state_mem.go:75] "Updated machine memory state" Jan 17 12:22:48.516173 kubelet[2717]: I0117 12:22:48.516108 2717 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 17 12:22:48.518242 kubelet[2717]: I0117 12:22:48.518076 2717 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 17 12:22:48.566825 kubelet[2717]: I0117 12:22:48.565506 2717 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 17 12:22:48.572306 kubelet[2717]: I0117 12:22:48.572274 2717 topology_manager.go:215] "Topology Admit Handler" podUID="403191808deda6abddccc06e53df2d12" podNamespace="kube-system" podName="kube-apiserver-localhost" Jan 17 12:22:48.572413 kubelet[2717]: I0117 12:22:48.572366 2717 topology_manager.go:215] "Topology Admit Handler" podUID="dd466de870bdf0e573d7965dbd759acf" podNamespace="kube-system" podName="kube-controller-manager-localhost" Jan 17 12:22:48.572442 kubelet[2717]: I0117 12:22:48.572417 2717 topology_manager.go:215] "Topology Admit Handler" podUID="605dd245551545e29d4e79fb03fd341e" podNamespace="kube-system" podName="kube-scheduler-localhost" Jan 17 12:22:48.574608 kubelet[2717]: I0117 12:22:48.573297 2717 kubelet_node_status.go:112] "Node was previously registered" node="localhost" Jan 17 12:22:48.574608 kubelet[2717]: I0117 12:22:48.573389 2717 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Jan 17 12:22:48.578952 kubelet[2717]: E0117 12:22:48.578892 2717 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jan 17 12:22:48.764996 kubelet[2717]: I0117 12:22:48.764961 2717 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: 
\"kubernetes.io/host-path/dd466de870bdf0e573d7965dbd759acf-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"dd466de870bdf0e573d7965dbd759acf\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 12:22:48.764996 kubelet[2717]: I0117 12:22:48.765007 2717 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/dd466de870bdf0e573d7965dbd759acf-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"dd466de870bdf0e573d7965dbd759acf\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 12:22:48.765150 kubelet[2717]: I0117 12:22:48.765033 2717 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/dd466de870bdf0e573d7965dbd759acf-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"dd466de870bdf0e573d7965dbd759acf\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 12:22:48.765150 kubelet[2717]: I0117 12:22:48.765056 2717 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/403191808deda6abddccc06e53df2d12-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"403191808deda6abddccc06e53df2d12\") " pod="kube-system/kube-apiserver-localhost" Jan 17 12:22:48.765150 kubelet[2717]: I0117 12:22:48.765077 2717 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/403191808deda6abddccc06e53df2d12-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"403191808deda6abddccc06e53df2d12\") " pod="kube-system/kube-apiserver-localhost" Jan 17 12:22:48.765150 kubelet[2717]: I0117 12:22:48.765097 2717 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/dd466de870bdf0e573d7965dbd759acf-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"dd466de870bdf0e573d7965dbd759acf\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 12:22:48.765150 kubelet[2717]: I0117 12:22:48.765145 2717 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/403191808deda6abddccc06e53df2d12-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"403191808deda6abddccc06e53df2d12\") " pod="kube-system/kube-apiserver-localhost" Jan 17 12:22:48.765260 kubelet[2717]: I0117 12:22:48.765167 2717 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/dd466de870bdf0e573d7965dbd759acf-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"dd466de870bdf0e573d7965dbd759acf\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 12:22:48.765260 kubelet[2717]: I0117 12:22:48.765187 2717 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/605dd245551545e29d4e79fb03fd341e-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"605dd245551545e29d4e79fb03fd341e\") " pod="kube-system/kube-scheduler-localhost" Jan 17 12:22:48.877416 kubelet[2717]: E0117 12:22:48.877306 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:22:48.880420 kubelet[2717]: E0117 12:22:48.880392 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:22:48.881513 kubelet[2717]: E0117 12:22:48.880936 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:22:49.446542 kubelet[2717]: I0117 12:22:49.446494 2717 apiserver.go:52] "Watching apiserver" Jan 17 12:22:49.464145 kubelet[2717]: I0117 12:22:49.464087 2717 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jan 17 12:22:49.485926 kubelet[2717]: E0117 12:22:49.485326 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:22:49.485926 kubelet[2717]: E0117 12:22:49.485432 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:22:49.490256 kubelet[2717]: E0117 12:22:49.489778 2717 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jan 17 12:22:49.490436 kubelet[2717]: E0117 12:22:49.490424 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:22:49.515096 kubelet[2717]: I0117 12:22:49.514308 2717 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.5142675799999998 podStartE2EDuration="1.51426758s" podCreationTimestamp="2025-01-17 12:22:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:22:49.504854844 +0000 UTC m=+1.118149683" watchObservedRunningTime="2025-01-17 12:22:49.51426758 +0000 UTC m=+1.127562419" Jan 17 12:22:49.524262 kubelet[2717]: I0117 12:22:49.524112 2717 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.524078379 podStartE2EDuration="3.524078379s" podCreationTimestamp="2025-01-17 12:22:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:22:49.514246543 +0000 UTC m=+1.127541382" watchObservedRunningTime="2025-01-17 12:22:49.524078379 +0000 UTC m=+1.137373218" Jan 17 12:22:49.524553 kubelet[2717]: I0117 12:22:49.524532 2717 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.524504078 podStartE2EDuration="1.524504078s" podCreationTimestamp="2025-01-17 12:22:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:22:49.523263535 +0000 UTC m=+1.136558374" watchObservedRunningTime="2025-01-17 12:22:49.524504078 +0000 UTC m=+1.137798917" Jan 17 12:22:50.487217 kubelet[2717]: E0117 12:22:50.487168 2717 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:22:51.487995 kubelet[2717]: E0117 12:22:51.487902 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:22:52.428582 sudo[1749]: pam_unix(sudo:session): session closed for user root Jan 17 12:22:52.431685 sshd[1742]: pam_unix(sshd:session): session closed for user core Jan 17 12:22:52.435553 systemd[1]: sshd@6-10.0.0.132:22-10.0.0.1:38196.service: Deactivated successfully. Jan 17 12:22:52.440033 systemd[1]: session-7.scope: Deactivated successfully. Jan 17 12:22:52.440737 systemd-logind[1520]: Session 7 logged out. Waiting for processes to exit. Jan 17 12:22:52.441567 systemd-logind[1520]: Removed session 7. Jan 17 12:22:54.758390 kubelet[2717]: E0117 12:22:54.758270 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:22:55.493556 kubelet[2717]: E0117 12:22:55.493531 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:22:55.881854 kubelet[2717]: E0117 12:22:55.881748 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:22:56.495941 kubelet[2717]: E0117 12:22:56.495914 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:23:00.354939 kubelet[2717]: E0117 12:23:00.354903 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:23:01.281544 kubelet[2717]: I0117 12:23:01.281504 2717 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 17 12:23:01.297854 containerd[1548]: time="2025-01-17T12:23:01.297802980Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jan 17 12:23:01.298453 kubelet[2717]: I0117 12:23:01.298425 2717 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 17 12:23:01.905021 kubelet[2717]: I0117 12:23:01.904979 2717 topology_manager.go:215] "Topology Admit Handler" podUID="8e91c5bd-804d-4d6f-9c47-84cc4abab3a9" podNamespace="kube-system" podName="kube-proxy-ttmc6" Jan 17 12:23:01.952702 kubelet[2717]: I0117 12:23:01.952655 2717 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8e91c5bd-804d-4d6f-9c47-84cc4abab3a9-xtables-lock\") pod \"kube-proxy-ttmc6\" (UID: \"8e91c5bd-804d-4d6f-9c47-84cc4abab3a9\") " pod="kube-system/kube-proxy-ttmc6" Jan 17 12:23:01.952702 kubelet[2717]: I0117 12:23:01.952702 2717 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/8e91c5bd-804d-4d6f-9c47-84cc4abab3a9-kube-proxy\") pod \"kube-proxy-ttmc6\" (UID: \"8e91c5bd-804d-4d6f-9c47-84cc4abab3a9\") " pod="kube-system/kube-proxy-ttmc6" Jan 17 12:23:01.952839 kubelet[2717]: I0117 12:23:01.952724 2717 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8e91c5bd-804d-4d6f-9c47-84cc4abab3a9-lib-modules\") pod \"kube-proxy-ttmc6\" (UID: \"8e91c5bd-804d-4d6f-9c47-84cc4abab3a9\") " pod="kube-system/kube-proxy-ttmc6" Jan 17 12:23:01.952839 kubelet[2717]: I0117 12:23:01.952745 2717 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4nm7\" (UniqueName: \"kubernetes.io/projected/8e91c5bd-804d-4d6f-9c47-84cc4abab3a9-kube-api-access-q4nm7\") pod \"kube-proxy-ttmc6\" (UID: \"8e91c5bd-804d-4d6f-9c47-84cc4abab3a9\") " pod="kube-system/kube-proxy-ttmc6" Jan 17 12:23:02.065660 kubelet[2717]: E0117 12:23:02.065609 2717 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Jan 17 12:23:02.065660 kubelet[2717]: E0117 12:23:02.065649 2717 projected.go:200] Error preparing data for projected volume kube-api-access-q4nm7 for pod kube-system/kube-proxy-ttmc6: configmap "kube-root-ca.crt" not found Jan 17 12:23:02.065782 kubelet[2717]: E0117 12:23:02.065714 2717 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8e91c5bd-804d-4d6f-9c47-84cc4abab3a9-kube-api-access-q4nm7 podName:8e91c5bd-804d-4d6f-9c47-84cc4abab3a9 nodeName:}" failed. No retries permitted until 2025-01-17 12:23:02.565695429 +0000 UTC m=+14.178990268 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-q4nm7" (UniqueName: "kubernetes.io/projected/8e91c5bd-804d-4d6f-9c47-84cc4abab3a9-kube-api-access-q4nm7") pod "kube-proxy-ttmc6" (UID: "8e91c5bd-804d-4d6f-9c47-84cc4abab3a9") : configmap "kube-root-ca.crt" not found Jan 17 12:23:02.406139 kubelet[2717]: I0117 12:23:02.404169 2717 topology_manager.go:215] "Topology Admit Handler" podUID="a2250b0f-bafb-4c68-9145-a867b9c22d31" podNamespace="tigera-operator" podName="tigera-operator-c7ccbd65-m7n5v" Jan 17 12:23:02.455805 kubelet[2717]: I0117 12:23:02.455766 2717 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/a2250b0f-bafb-4c68-9145-a867b9c22d31-var-lib-calico\") pod \"tigera-operator-c7ccbd65-m7n5v\" (UID: \"a2250b0f-bafb-4c68-9145-a867b9c22d31\") " pod="tigera-operator/tigera-operator-c7ccbd65-m7n5v" Jan 17 12:23:02.456019 kubelet[2717]: I0117 12:23:02.456006 2717 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5dv72\" (UniqueName: \"kubernetes.io/projected/a2250b0f-bafb-4c68-9145-a867b9c22d31-kube-api-access-5dv72\") pod \"tigera-operator-c7ccbd65-m7n5v\" (UID: \"a2250b0f-bafb-4c68-9145-a867b9c22d31\") " pod="tigera-operator/tigera-operator-c7ccbd65-m7n5v" Jan 17 12:23:02.709540 containerd[1548]: time="2025-01-17T12:23:02.709441349Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-c7ccbd65-m7n5v,Uid:a2250b0f-bafb-4c68-9145-a867b9c22d31,Namespace:tigera-operator,Attempt:0,}" Jan 17 12:23:02.732916 containerd[1548]: time="2025-01-17T12:23:02.732844224Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:23:02.732916 containerd[1548]: time="2025-01-17T12:23:02.732895701Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:23:02.733061 containerd[1548]: time="2025-01-17T12:23:02.732911340Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:23:02.733249 containerd[1548]: time="2025-01-17T12:23:02.733138926Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:23:02.770031 containerd[1548]: time="2025-01-17T12:23:02.769907738Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-c7ccbd65-m7n5v,Uid:a2250b0f-bafb-4c68-9145-a867b9c22d31,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"4f39df237df8399656fbc5e0793ab8eea8bd17d084b431051d0b09506c293b1e\"" Jan 17 12:23:02.775321 containerd[1548]: time="2025-01-17T12:23:02.774806475Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\"" Jan 17 12:23:02.808362 kubelet[2717]: E0117 12:23:02.808088 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:23:02.809014 containerd[1548]: time="2025-01-17T12:23:02.808438360Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-ttmc6,Uid:8e91c5bd-804d-4d6f-9c47-84cc4abab3a9,Namespace:kube-system,Attempt:0,}" Jan 17 12:23:02.826459 containerd[1548]: time="2025-01-17T12:23:02.826387453Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:23:02.826559 containerd[1548]: time="2025-01-17T12:23:02.826439249Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:23:02.826559 containerd[1548]: time="2025-01-17T12:23:02.826456248Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:23:02.826559 containerd[1548]: time="2025-01-17T12:23:02.826534924Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:23:02.854696 containerd[1548]: time="2025-01-17T12:23:02.854615511Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-ttmc6,Uid:8e91c5bd-804d-4d6f-9c47-84cc4abab3a9,Namespace:kube-system,Attempt:0,} returns sandbox id \"20b1296613d13f1d12afe6fbde37b625e739411e2cbfc524bf56f4b055c404d7\"" Jan 17 12:23:02.856405 kubelet[2717]: E0117 12:23:02.855132 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:23:02.856655 containerd[1548]: time="2025-01-17T12:23:02.856615507Z" level=info msg="CreateContainer within sandbox \"20b1296613d13f1d12afe6fbde37b625e739411e2cbfc524bf56f4b055c404d7\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 17 12:23:02.875009 containerd[1548]: time="2025-01-17T12:23:02.874912059Z" level=info msg="CreateContainer within sandbox \"20b1296613d13f1d12afe6fbde37b625e739411e2cbfc524bf56f4b055c404d7\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"1bb1609c60b5ae84d9101b6df96add243b487e915a9f085d85f509fa4bb60ae8\"" Jan 17 12:23:02.877305 containerd[1548]: time="2025-01-17T12:23:02.877264593Z" level=info msg="StartContainer for \"1bb1609c60b5ae84d9101b6df96add243b487e915a9f085d85f509fa4bb60ae8\"" Jan 17 12:23:02.920042 containerd[1548]: time="2025-01-17T12:23:02.919997877Z" level=info msg="StartContainer for \"1bb1609c60b5ae84d9101b6df96add243b487e915a9f085d85f509fa4bb60ae8\" returns successfully" Jan 17 12:23:03.131542 update_engine[1534]: I20250117 12:23:03.131451 1534 update_attempter.cc:509] Updating boot flags... Jan 17 12:23:03.150424 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2975) Jan 17 12:23:03.181411 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2978) Jan 17 12:23:03.506978 kubelet[2717]: E0117 12:23:03.506866 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:23:03.572699 systemd[1]: run-containerd-runc-k8s.io-4f39df237df8399656fbc5e0793ab8eea8bd17d084b431051d0b09506c293b1e-runc.2rWKmO.mount: Deactivated successfully. Jan 17 12:23:08.733378 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2154791357.mount: Deactivated successfully. 
Jan 17 12:23:09.074132 containerd[1548]: time="2025-01-17T12:23:09.074083449Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:23:09.080367 containerd[1548]: time="2025-01-17T12:23:09.080309204Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=19125956" Jan 17 12:23:09.081236 containerd[1548]: time="2025-01-17T12:23:09.081204169Z" level=info msg="ImageCreate event name:\"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:23:09.083373 containerd[1548]: time="2025-01-17T12:23:09.083233129Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:23:09.084171 containerd[1548]: time="2025-01-17T12:23:09.084135534Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"19120155\" in 6.309283581s" Jan 17 12:23:09.084171 containerd[1548]: time="2025-01-17T12:23:09.084169533Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\"" Jan 17 12:23:09.088146 containerd[1548]: time="2025-01-17T12:23:09.088106778Z" level=info msg="CreateContainer within sandbox \"4f39df237df8399656fbc5e0793ab8eea8bd17d084b431051d0b09506c293b1e\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 17 12:23:09.097418 containerd[1548]: time="2025-01-17T12:23:09.097380574Z" level=info msg="CreateContainer within sandbox \"4f39df237df8399656fbc5e0793ab8eea8bd17d084b431051d0b09506c293b1e\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"7c7e5d3b5f29c22f53779e39f5369f52090ca426434f6218a812a9012229b3b0\"" Jan 17 12:23:09.098210 containerd[1548]: time="2025-01-17T12:23:09.098184102Z" level=info msg="StartContainer for \"7c7e5d3b5f29c22f53779e39f5369f52090ca426434f6218a812a9012229b3b0\"" Jan 17 12:23:09.140990 containerd[1548]: time="2025-01-17T12:23:09.140949583Z" level=info msg="StartContainer for \"7c7e5d3b5f29c22f53779e39f5369f52090ca426434f6218a812a9012229b3b0\" returns successfully" Jan 17 12:23:09.565415 kubelet[2717]: I0117 12:23:09.565227 2717 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-ttmc6" podStartSLOduration=8.565190362 podStartE2EDuration="8.565190362s" podCreationTimestamp="2025-01-17 12:23:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:23:03.514905474 +0000 UTC m=+15.128200313" watchObservedRunningTime="2025-01-17 12:23:09.565190362 +0000 UTC m=+21.178485201" Jan 17 12:23:09.566101 kubelet[2717]: I0117 12:23:09.566032 2717 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-c7ccbd65-m7n5v" podStartSLOduration=1.253900925 podStartE2EDuration="7.56600277s" podCreationTimestamp="2025-01-17 12:23:02 +0000 UTC" firstStartedPulling="2025-01-17 12:23:02.773712783 +0000 UTC m=+14.387007622" 
lastFinishedPulling="2025-01-17 12:23:09.085814628 +0000 UTC m=+20.699109467" observedRunningTime="2025-01-17 12:23:09.565439752 +0000 UTC m=+21.178734591" watchObservedRunningTime="2025-01-17 12:23:09.56600277 +0000 UTC m=+21.179297649" Jan 17 12:23:12.844084 kubelet[2717]: I0117 12:23:12.844041 2717 topology_manager.go:215] "Topology Admit Handler" podUID="ed9546f5-ad86-4b53-a57b-f4783d2408b5" podNamespace="calico-system" podName="calico-typha-569cf484b5-5nvhc" Jan 17 12:23:12.874550 kubelet[2717]: I0117 12:23:12.874511 2717 topology_manager.go:215] "Topology Admit Handler" podUID="338f040e-ba71-444f-becd-7e46d7002c8f" podNamespace="calico-system" podName="calico-node-448hz" Jan 17 12:23:12.928783 kubelet[2717]: I0117 12:23:12.928753 2717 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/338f040e-ba71-444f-becd-7e46d7002c8f-var-run-calico\") pod \"calico-node-448hz\" (UID: \"338f040e-ba71-444f-becd-7e46d7002c8f\") " pod="calico-system/calico-node-448hz" Jan 17 12:23:12.936367 kubelet[2717]: I0117 12:23:12.936144 2717 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pcbwt\" (UniqueName: \"kubernetes.io/projected/338f040e-ba71-444f-becd-7e46d7002c8f-kube-api-access-pcbwt\") pod \"calico-node-448hz\" (UID: \"338f040e-ba71-444f-becd-7e46d7002c8f\") " pod="calico-system/calico-node-448hz" Jan 17 12:23:12.936714 kubelet[2717]: I0117 12:23:12.936693 2717 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ed9546f5-ad86-4b53-a57b-f4783d2408b5-tigera-ca-bundle\") pod \"calico-typha-569cf484b5-5nvhc\" (UID: \"ed9546f5-ad86-4b53-a57b-f4783d2408b5\") " pod="calico-system/calico-typha-569cf484b5-5nvhc" Jan 17 12:23:12.936765 kubelet[2717]: I0117 12:23:12.936738 2717 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/338f040e-ba71-444f-becd-7e46d7002c8f-lib-modules\") pod \"calico-node-448hz\" (UID: \"338f040e-ba71-444f-becd-7e46d7002c8f\") " pod="calico-system/calico-node-448hz" Jan 17 12:23:12.936765 kubelet[2717]: I0117 12:23:12.936763 2717 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/338f040e-ba71-444f-becd-7e46d7002c8f-var-lib-calico\") pod \"calico-node-448hz\" (UID: \"338f040e-ba71-444f-becd-7e46d7002c8f\") " pod="calico-system/calico-node-448hz" Jan 17 12:23:12.936865 kubelet[2717]: I0117 12:23:12.936783 2717 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/338f040e-ba71-444f-becd-7e46d7002c8f-cni-net-dir\") pod \"calico-node-448hz\" (UID: \"338f040e-ba71-444f-becd-7e46d7002c8f\") " pod="calico-system/calico-node-448hz" Jan 17 12:23:12.936865 kubelet[2717]: I0117 12:23:12.936811 2717 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/338f040e-ba71-444f-becd-7e46d7002c8f-flexvol-driver-host\") pod \"calico-node-448hz\" (UID: \"338f040e-ba71-444f-becd-7e46d7002c8f\") " pod="calico-system/calico-node-448hz" Jan 17 12:23:12.936865 kubelet[2717]: I0117 12:23:12.936837 2717 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/338f040e-ba71-444f-becd-7e46d7002c8f-tigera-ca-bundle\") pod \"calico-node-448hz\" (UID: \"338f040e-ba71-444f-becd-7e46d7002c8f\") " pod="calico-system/calico-node-448hz" Jan 17 12:23:12.936865 kubelet[2717]: I0117 12:23:12.936856 2717 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/338f040e-ba71-444f-becd-7e46d7002c8f-node-certs\") pod \"calico-node-448hz\" (UID: \"338f040e-ba71-444f-becd-7e46d7002c8f\") " pod="calico-system/calico-node-448hz" Jan 17 12:23:12.936956 kubelet[2717]: I0117 12:23:12.936875 2717 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/338f040e-ba71-444f-becd-7e46d7002c8f-cni-bin-dir\") pod \"calico-node-448hz\" (UID: \"338f040e-ba71-444f-becd-7e46d7002c8f\") " pod="calico-system/calico-node-448hz" Jan 17 12:23:12.936956 kubelet[2717]: I0117 12:23:12.936905 2717 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67lqm\" (UniqueName: \"kubernetes.io/projected/ed9546f5-ad86-4b53-a57b-f4783d2408b5-kube-api-access-67lqm\") pod \"calico-typha-569cf484b5-5nvhc\" (UID: \"ed9546f5-ad86-4b53-a57b-f4783d2408b5\") " pod="calico-system/calico-typha-569cf484b5-5nvhc" Jan 17 12:23:12.936956 kubelet[2717]: I0117 12:23:12.936925 2717 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/338f040e-ba71-444f-becd-7e46d7002c8f-cni-log-dir\") pod \"calico-node-448hz\" (UID: \"338f040e-ba71-444f-becd-7e46d7002c8f\") " pod="calico-system/calico-node-448hz" Jan 17 12:23:12.936956 kubelet[2717]: I0117 12:23:12.936945 2717 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/338f040e-ba71-444f-becd-7e46d7002c8f-xtables-lock\") pod \"calico-node-448hz\" (UID: \"338f040e-ba71-444f-becd-7e46d7002c8f\") " pod="calico-system/calico-node-448hz" Jan 17 12:23:12.937048 kubelet[2717]: I0117 12:23:12.936964 2717 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/338f040e-ba71-444f-becd-7e46d7002c8f-policysync\") pod \"calico-node-448hz\" (UID: \"338f040e-ba71-444f-becd-7e46d7002c8f\") " pod="calico-system/calico-node-448hz" Jan 17 12:23:12.937048 kubelet[2717]: I0117 12:23:12.936988 2717 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/ed9546f5-ad86-4b53-a57b-f4783d2408b5-typha-certs\") pod \"calico-typha-569cf484b5-5nvhc\" (UID: \"ed9546f5-ad86-4b53-a57b-f4783d2408b5\") " pod="calico-system/calico-typha-569cf484b5-5nvhc" Jan 17 12:23:12.989873 kubelet[2717]: I0117 12:23:12.989722 2717 topology_manager.go:215] "Topology Admit Handler" podUID="0f7f2ce6-11ce-4b25-b85f-2f1455b73126" podNamespace="calico-system" podName="csi-node-driver-d8gl2" Jan 17 12:23:12.991747 kubelet[2717]: E0117 12:23:12.991599 2717 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" 
pod="calico-system/csi-node-driver-d8gl2" podUID="0f7f2ce6-11ce-4b25-b85f-2f1455b73126" Jan 17 12:23:13.037155 kubelet[2717]: I0117 12:23:13.037110 2717 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0f7f2ce6-11ce-4b25-b85f-2f1455b73126-kubelet-dir\") pod \"csi-node-driver-d8gl2\" (UID: \"0f7f2ce6-11ce-4b25-b85f-2f1455b73126\") " pod="calico-system/csi-node-driver-d8gl2" Jan 17 12:23:13.037332 kubelet[2717]: I0117 12:23:13.037197 2717 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/0f7f2ce6-11ce-4b25-b85f-2f1455b73126-varrun\") pod \"csi-node-driver-d8gl2\" (UID: \"0f7f2ce6-11ce-4b25-b85f-2f1455b73126\") " pod="calico-system/csi-node-driver-d8gl2" Jan 17 12:23:13.037332 kubelet[2717]: I0117 12:23:13.037295 2717 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/0f7f2ce6-11ce-4b25-b85f-2f1455b73126-socket-dir\") pod \"csi-node-driver-d8gl2\" (UID: \"0f7f2ce6-11ce-4b25-b85f-2f1455b73126\") " pod="calico-system/csi-node-driver-d8gl2" Jan 17 12:23:13.037416 kubelet[2717]: I0117 12:23:13.037345 2717 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/0f7f2ce6-11ce-4b25-b85f-2f1455b73126-registration-dir\") pod \"csi-node-driver-d8gl2\" (UID: \"0f7f2ce6-11ce-4b25-b85f-2f1455b73126\") " pod="calico-system/csi-node-driver-d8gl2" Jan 17 12:23:13.037416 kubelet[2717]: I0117 12:23:13.037401 2717 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gssvb\" (UniqueName: \"kubernetes.io/projected/0f7f2ce6-11ce-4b25-b85f-2f1455b73126-kube-api-access-gssvb\") pod \"csi-node-driver-d8gl2\" (UID: \"0f7f2ce6-11ce-4b25-b85f-2f1455b73126\") " pod="calico-system/csi-node-driver-d8gl2" Jan 17 12:23:13.042443 kubelet[2717]: E0117 12:23:13.042195 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:23:13.042443 kubelet[2717]: W0117 12:23:13.042226 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:23:13.042443 kubelet[2717]: E0117 12:23:13.042248 2717 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:23:13.046017 kubelet[2717]: E0117 12:23:13.045991 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:23:13.046017 kubelet[2717]: W0117 12:23:13.046011 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:23:13.047489 kubelet[2717]: E0117 12:23:13.046029 2717 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:23:13.051603 kubelet[2717]: E0117 12:23:13.049931 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:23:13.051603 kubelet[2717]: W0117 12:23:13.049952 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:23:13.051603 kubelet[2717]: E0117 12:23:13.050069 2717 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:23:13.055364 kubelet[2717]: E0117 12:23:13.053130 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:23:13.055364 kubelet[2717]: W0117 12:23:13.053146 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:23:13.055364 kubelet[2717]: E0117 12:23:13.053163 2717 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:23:13.055364 kubelet[2717]: E0117 12:23:13.055151 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:23:13.055364 kubelet[2717]: W0117 12:23:13.055163 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:23:13.055364 kubelet[2717]: E0117 12:23:13.055177 2717 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:23:13.139015 kubelet[2717]: E0117 12:23:13.138826 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:23:13.139015 kubelet[2717]: W0117 12:23:13.138850 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:23:13.139015 kubelet[2717]: E0117 12:23:13.138872 2717 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:23:13.139217 kubelet[2717]: E0117 12:23:13.139182 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:23:13.139217 kubelet[2717]: W0117 12:23:13.139196 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:23:13.139337 kubelet[2717]: E0117 12:23:13.139216 2717 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:23:13.140020 kubelet[2717]: E0117 12:23:13.139992 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:23:13.140020 kubelet[2717]: W0117 12:23:13.140012 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:23:13.140020 kubelet[2717]: E0117 12:23:13.140032 2717 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:23:13.140905 kubelet[2717]: E0117 12:23:13.140296 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:23:13.140905 kubelet[2717]: W0117 12:23:13.140309 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:23:13.140905 kubelet[2717]: E0117 12:23:13.140321 2717 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:23:13.144615 kubelet[2717]: E0117 12:23:13.144587 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:23:13.144615 kubelet[2717]: W0117 12:23:13.144604 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:23:13.144716 kubelet[2717]: E0117 12:23:13.144651 2717 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:23:13.144990 kubelet[2717]: E0117 12:23:13.144939 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:23:13.144990 kubelet[2717]: W0117 12:23:13.144955 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:23:13.144990 kubelet[2717]: E0117 12:23:13.144968 2717 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:23:13.145329 kubelet[2717]: E0117 12:23:13.145175 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:23:13.145329 kubelet[2717]: W0117 12:23:13.145189 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:23:13.145329 kubelet[2717]: E0117 12:23:13.145206 2717 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:23:13.145626 kubelet[2717]: E0117 12:23:13.145425 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:23:13.145626 kubelet[2717]: W0117 12:23:13.145435 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:23:13.145626 kubelet[2717]: E0117 12:23:13.145452 2717 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:23:13.145767 kubelet[2717]: E0117 12:23:13.145752 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:23:13.145839 kubelet[2717]: W0117 12:23:13.145827 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:23:13.145953 kubelet[2717]: E0117 12:23:13.145896 2717 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:23:13.146146 kubelet[2717]: E0117 12:23:13.146134 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:23:13.146217 kubelet[2717]: W0117 12:23:13.146205 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:23:13.146377 kubelet[2717]: E0117 12:23:13.146274 2717 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:23:13.146486 kubelet[2717]: E0117 12:23:13.146475 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:23:13.146543 kubelet[2717]: W0117 12:23:13.146532 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:23:13.146605 kubelet[2717]: E0117 12:23:13.146596 2717 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:23:13.146831 kubelet[2717]: E0117 12:23:13.146814 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:23:13.146831 kubelet[2717]: W0117 12:23:13.146829 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:23:13.146901 kubelet[2717]: E0117 12:23:13.146846 2717 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:23:13.147102 kubelet[2717]: E0117 12:23:13.147090 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:23:13.147102 kubelet[2717]: W0117 12:23:13.147101 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:23:13.147201 kubelet[2717]: E0117 12:23:13.147182 2717 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:23:13.147631 kubelet[2717]: E0117 12:23:13.147591 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:23:13.147631 kubelet[2717]: W0117 12:23:13.147610 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:23:13.147721 kubelet[2717]: E0117 12:23:13.147663 2717 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:23:13.147972 kubelet[2717]: E0117 12:23:13.147854 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:23:13.147972 kubelet[2717]: W0117 12:23:13.147867 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:23:13.147972 kubelet[2717]: E0117 12:23:13.147948 2717 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:23:13.148231 kubelet[2717]: E0117 12:23:13.148013 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:23:13.148231 kubelet[2717]: W0117 12:23:13.148019 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:23:13.148370 kubelet[2717]: E0117 12:23:13.148341 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:23:13.148577 kubelet[2717]: W0117 12:23:13.148432 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:23:13.148577 kubelet[2717]: E0117 12:23:13.148527 2717 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:23:13.149283 kubelet[2717]: E0117 12:23:13.148468 2717 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:23:13.149645 kubelet[2717]: E0117 12:23:13.149595 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:23:13.149821 kubelet[2717]: W0117 12:23:13.149720 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:23:13.149821 kubelet[2717]: E0117 12:23:13.149761 2717 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:23:13.149983 kubelet[2717]: E0117 12:23:13.149971 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:23:13.150082 kubelet[2717]: W0117 12:23:13.150036 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:23:13.150169 kubelet[2717]: E0117 12:23:13.150132 2717 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:23:13.151661 kubelet[2717]: E0117 12:23:13.151316 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:23:13.151661 kubelet[2717]: E0117 12:23:13.151387 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:23:13.151661 kubelet[2717]: W0117 12:23:13.151401 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:23:13.152376 kubelet[2717]: E0117 12:23:13.151878 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:23:13.152376 kubelet[2717]: W0117 12:23:13.151894 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:23:13.152376 kubelet[2717]: E0117 12:23:13.151907 2717 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:23:13.152376 kubelet[2717]: E0117 12:23:13.152134 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:23:13.152376 kubelet[2717]: W0117 12:23:13.152140 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:23:13.152376 kubelet[2717]: E0117 12:23:13.152149 2717 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:23:13.152376 kubelet[2717]: E0117 12:23:13.152321 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:23:13.152376 kubelet[2717]: W0117 12:23:13.152328 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:23:13.152376 kubelet[2717]: E0117 12:23:13.152337 2717 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:23:13.153660 kubelet[2717]: E0117 12:23:13.152686 2717 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:23:13.153697 containerd[1548]: time="2025-01-17T12:23:13.152446993Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-569cf484b5-5nvhc,Uid:ed9546f5-ad86-4b53-a57b-f4783d2408b5,Namespace:calico-system,Attempt:0,}" Jan 17 12:23:13.154890 kubelet[2717]: E0117 12:23:13.154659 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:23:13.154890 kubelet[2717]: W0117 12:23:13.154711 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:23:13.154890 kubelet[2717]: E0117 12:23:13.154729 2717 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:23:13.155113 kubelet[2717]: E0117 12:23:13.155100 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:23:13.155208 kubelet[2717]: W0117 12:23:13.155168 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:23:13.155208 kubelet[2717]: E0117 12:23:13.155187 2717 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:23:13.162754 kubelet[2717]: E0117 12:23:13.162732 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:23:13.162754 kubelet[2717]: W0117 12:23:13.162748 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:23:13.162885 kubelet[2717]: E0117 12:23:13.162767 2717 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:23:13.177483 kubelet[2717]: E0117 12:23:13.177454 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:23:13.178128 containerd[1548]: time="2025-01-17T12:23:13.178088615Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-448hz,Uid:338f040e-ba71-444f-becd-7e46d7002c8f,Namespace:calico-system,Attempt:0,}" Jan 17 12:23:13.213877 containerd[1548]: time="2025-01-17T12:23:13.213635177Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:23:13.213877 containerd[1548]: time="2025-01-17T12:23:13.213691855Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:23:13.213877 containerd[1548]: time="2025-01-17T12:23:13.213706095Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:23:13.214819 containerd[1548]: time="2025-01-17T12:23:13.214025045Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:23:13.215397 containerd[1548]: time="2025-01-17T12:23:13.214864940Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:23:13.215397 containerd[1548]: time="2025-01-17T12:23:13.214916418Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:23:13.215397 containerd[1548]: time="2025-01-17T12:23:13.214967897Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:23:13.215397 containerd[1548]: time="2025-01-17T12:23:13.215053134Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:23:13.254061 containerd[1548]: time="2025-01-17T12:23:13.254002912Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-448hz,Uid:338f040e-ba71-444f-becd-7e46d7002c8f,Namespace:calico-system,Attempt:0,} returns sandbox id \"5fab4da34dbaf7c5130977784a45b784b169f00927fc59c7a87cc684898db2ac\"" Jan 17 12:23:13.255319 kubelet[2717]: E0117 12:23:13.255280 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:23:13.259703 containerd[1548]: time="2025-01-17T12:23:13.259665581Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Jan 17 12:23:13.271111 containerd[1548]: time="2025-01-17T12:23:13.271072475Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-569cf484b5-5nvhc,Uid:ed9546f5-ad86-4b53-a57b-f4783d2408b5,Namespace:calico-system,Attempt:0,} returns sandbox id \"e57cbf72d15a892219b721a90cb4d5e2a11839b39d152af1c2bd672bf7bd7c3b\"" Jan 17 12:23:13.271910 kubelet[2717]: E0117 12:23:13.271892 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:23:14.137917 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3798890420.mount: Deactivated successfully. Jan 17 12:23:14.194819 containerd[1548]: time="2025-01-17T12:23:14.194764899Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:23:14.195734 containerd[1548]: time="2025-01-17T12:23:14.195691313Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=6487603" Jan 17 12:23:14.196531 containerd[1548]: time="2025-01-17T12:23:14.196508330Z" level=info msg="ImageCreate event name:\"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:23:14.198332 containerd[1548]: time="2025-01-17T12:23:14.198287559Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:23:14.199179 containerd[1548]: time="2025-01-17T12:23:14.199144855Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6487425\" in 939.401356ms" Jan 17 12:23:14.199221 containerd[1548]: time="2025-01-17T12:23:14.199179254Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\"" Jan 17 12:23:14.200484 containerd[1548]: time="2025-01-17T12:23:14.200443978Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\"" Jan 17 12:23:14.215666 containerd[1548]: time="2025-01-17T12:23:14.215617666Z" level=info msg="CreateContainer within sandbox \"5fab4da34dbaf7c5130977784a45b784b169f00927fc59c7a87cc684898db2ac\" for 
container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 17 12:23:14.225332 containerd[1548]: time="2025-01-17T12:23:14.225221513Z" level=info msg="CreateContainer within sandbox \"5fab4da34dbaf7c5130977784a45b784b169f00927fc59c7a87cc684898db2ac\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"b0338a39e67e300f7d695e33ef75b68f361737045ab1c259774d51cd573c5a9d\"" Jan 17 12:23:14.226579 containerd[1548]: time="2025-01-17T12:23:14.226533796Z" level=info msg="StartContainer for \"b0338a39e67e300f7d695e33ef75b68f361737045ab1c259774d51cd573c5a9d\"" Jan 17 12:23:14.288782 containerd[1548]: time="2025-01-17T12:23:14.288729867Z" level=info msg="StartContainer for \"b0338a39e67e300f7d695e33ef75b68f361737045ab1c259774d51cd573c5a9d\" returns successfully" Jan 17 12:23:14.333371 containerd[1548]: time="2025-01-17T12:23:14.333278000Z" level=info msg="shim disconnected" id=b0338a39e67e300f7d695e33ef75b68f361737045ab1c259774d51cd573c5a9d namespace=k8s.io Jan 17 12:23:14.333371 containerd[1548]: time="2025-01-17T12:23:14.333373717Z" level=warning msg="cleaning up after shim disconnected" id=b0338a39e67e300f7d695e33ef75b68f361737045ab1c259774d51cd573c5a9d namespace=k8s.io Jan 17 12:23:14.333578 containerd[1548]: time="2025-01-17T12:23:14.333385557Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 12:23:14.473289 kubelet[2717]: E0117 12:23:14.473166 2717 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-d8gl2" podUID="0f7f2ce6-11ce-4b25-b85f-2f1455b73126" Jan 17 12:23:14.562766 kubelet[2717]: E0117 12:23:14.562729 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:23:15.046032 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b0338a39e67e300f7d695e33ef75b68f361737045ab1c259774d51cd573c5a9d-rootfs.mount: Deactivated successfully. 
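A brief aside on the repeated driver-call.go / plugins.go errors captured above: the kubelet probes the FlexVolume plugin directory nodeagent~uds, whose driver binary at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds is not present, so the driver call returns empty output and decoding that output as JSON fails with "unexpected end of JSON input". The sketch below is illustrative only (the type and field names are assumptions, not the kubelet's own code); it simply reproduces why an empty driver response yields exactly the logged error.

```go
// Illustrative sketch: decoding the empty stdout of a FlexVolume driver call
// that never ran produces the "unexpected end of JSON input" error seen above.
package main

import (
	"encoding/json"
	"fmt"
)

// driverStatus loosely mirrors the JSON a FlexVolume driver is expected to
// print on stdout; the field names here are assumptions for illustration.
type driverStatus struct {
	Status  string `json:"status"`
	Message string `json:"message,omitempty"`
}

func main() {
	output := "" // stdout captured when the driver executable was not found
	var st driverStatus
	if err := json.Unmarshal([]byte(output), &st); err != nil {
		fmt.Println("error:", err) // prints: error: unexpected end of JSON input
	}
}
```

The message repeats because the kubelet re-probes the plugin directory around volume operations; presumably the entries would stop once the nodeagent~uds driver is installed or its directory removed.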
Jan 17 12:23:15.325980 containerd[1548]: time="2025-01-17T12:23:15.325865346Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:23:15.326565 containerd[1548]: time="2025-01-17T12:23:15.326530249Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=27861516" Jan 17 12:23:15.327473 containerd[1548]: time="2025-01-17T12:23:15.327449584Z" level=info msg="ImageCreate event name:\"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:23:15.330637 containerd[1548]: time="2025-01-17T12:23:15.329303335Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:23:15.330637 containerd[1548]: time="2025-01-17T12:23:15.330535982Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"29231162\" in 1.130053605s" Jan 17 12:23:15.330637 containerd[1548]: time="2025-01-17T12:23:15.330563741Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\"" Jan 17 12:23:15.332988 containerd[1548]: time="2025-01-17T12:23:15.332951957Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Jan 17 12:23:15.342653 containerd[1548]: time="2025-01-17T12:23:15.342614140Z" level=info msg="CreateContainer within sandbox \"e57cbf72d15a892219b721a90cb4d5e2a11839b39d152af1c2bd672bf7bd7c3b\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 17 12:23:15.352516 containerd[1548]: time="2025-01-17T12:23:15.352484437Z" level=info msg="CreateContainer within sandbox \"e57cbf72d15a892219b721a90cb4d5e2a11839b39d152af1c2bd672bf7bd7c3b\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"66444e935c2bc62ebda8ae1c01befd793e7e5f2293fad8f198af91cb166641cc\"" Jan 17 12:23:15.354434 containerd[1548]: time="2025-01-17T12:23:15.354407225Z" level=info msg="StartContainer for \"66444e935c2bc62ebda8ae1c01befd793e7e5f2293fad8f198af91cb166641cc\"" Jan 17 12:23:15.429236 containerd[1548]: time="2025-01-17T12:23:15.429189431Z" level=info msg="StartContainer for \"66444e935c2bc62ebda8ae1c01befd793e7e5f2293fad8f198af91cb166641cc\" returns successfully" Jan 17 12:23:15.568174 kubelet[2717]: E0117 12:23:15.567274 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:23:15.585455 kubelet[2717]: I0117 12:23:15.584936 2717 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-569cf484b5-5nvhc" podStartSLOduration=1.5275036260000001 podStartE2EDuration="3.58489012s" podCreationTimestamp="2025-01-17 12:23:12 +0000 UTC" firstStartedPulling="2025-01-17 12:23:13.274157221 +0000 UTC m=+24.887452060" lastFinishedPulling="2025-01-17 12:23:15.331543715 +0000 UTC m=+26.944838554" observedRunningTime="2025-01-17 12:23:15.584188019 +0000 UTC m=+27.197482858" 
watchObservedRunningTime="2025-01-17 12:23:15.58489012 +0000 UTC m=+27.198184959" Jan 17 12:23:16.472771 kubelet[2717]: E0117 12:23:16.472724 2717 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-d8gl2" podUID="0f7f2ce6-11ce-4b25-b85f-2f1455b73126" Jan 17 12:23:16.573086 kubelet[2717]: I0117 12:23:16.573051 2717 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 17 12:23:16.574379 kubelet[2717]: E0117 12:23:16.574361 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:23:17.388033 containerd[1548]: time="2025-01-17T12:23:17.387990597Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:23:17.390038 containerd[1548]: time="2025-01-17T12:23:17.389992630Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=89703123" Jan 17 12:23:17.390841 containerd[1548]: time="2025-01-17T12:23:17.390794931Z" level=info msg="ImageCreate event name:\"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:23:17.392791 containerd[1548]: time="2025-01-17T12:23:17.392739726Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:23:17.393625 containerd[1548]: time="2025-01-17T12:23:17.393592826Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"91072777\" in 2.060604629s" Jan 17 12:23:17.393686 containerd[1548]: time="2025-01-17T12:23:17.393632305Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\"" Jan 17 12:23:17.396447 containerd[1548]: time="2025-01-17T12:23:17.396415160Z" level=info msg="CreateContainer within sandbox \"5fab4da34dbaf7c5130977784a45b784b169f00927fc59c7a87cc684898db2ac\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 17 12:23:17.411088 containerd[1548]: time="2025-01-17T12:23:17.411036457Z" level=info msg="CreateContainer within sandbox \"5fab4da34dbaf7c5130977784a45b784b169f00927fc59c7a87cc684898db2ac\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"63bbe1058d9165a1bfc28d453666fb566270883635df302e8796b892f44f86b9\"" Jan 17 12:23:17.411624 containerd[1548]: time="2025-01-17T12:23:17.411585084Z" level=info msg="StartContainer for \"63bbe1058d9165a1bfc28d453666fb566270883635df302e8796b892f44f86b9\"" Jan 17 12:23:17.466632 containerd[1548]: time="2025-01-17T12:23:17.466585435Z" level=info msg="StartContainer for \"63bbe1058d9165a1bfc28d453666fb566270883635df302e8796b892f44f86b9\" returns successfully" Jan 17 12:23:17.574546 kubelet[2717]: E0117 12:23:17.574395 2717 dns.go:153] "Nameserver limits 
exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:23:17.994332 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-63bbe1058d9165a1bfc28d453666fb566270883635df302e8796b892f44f86b9-rootfs.mount: Deactivated successfully. Jan 17 12:23:18.004148 kubelet[2717]: I0117 12:23:17.999021 2717 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jan 17 12:23:18.026451 kubelet[2717]: I0117 12:23:18.026414 2717 topology_manager.go:215] "Topology Admit Handler" podUID="c85f2c0e-97a2-4021-ae15-4f96370ba9bd" podNamespace="kube-system" podName="coredns-76f75df574-rvkqr" Jan 17 12:23:18.028335 kubelet[2717]: I0117 12:23:18.028157 2717 topology_manager.go:215] "Topology Admit Handler" podUID="cf7e29e6-980f-42a1-85a4-fdb746002f5f" podNamespace="kube-system" podName="coredns-76f75df574-kf6mr" Jan 17 12:23:18.030184 kubelet[2717]: I0117 12:23:18.030134 2717 topology_manager.go:215] "Topology Admit Handler" podUID="85f28c23-7266-45b9-ab87-b4179415ea7d" podNamespace="calico-apiserver" podName="calico-apiserver-5d99b78498-wqsg9" Jan 17 12:23:18.030588 kubelet[2717]: I0117 12:23:18.030262 2717 topology_manager.go:215] "Topology Admit Handler" podUID="ff552645-5250-458a-ac60-d14e1b9bee85" podNamespace="calico-apiserver" podName="calico-apiserver-5d99b78498-h976w" Jan 17 12:23:18.033227 kubelet[2717]: I0117 12:23:18.031184 2717 topology_manager.go:215] "Topology Admit Handler" podUID="e256e234-13b1-4fd7-b50f-50db194b2888" podNamespace="calico-system" podName="calico-kube-controllers-9bdf448d6-lzvz2" Jan 17 12:23:18.085745 kubelet[2717]: I0117 12:23:18.085650 2717 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nmnr6\" (UniqueName: \"kubernetes.io/projected/85f28c23-7266-45b9-ab87-b4179415ea7d-kube-api-access-nmnr6\") pod \"calico-apiserver-5d99b78498-wqsg9\" (UID: \"85f28c23-7266-45b9-ab87-b4179415ea7d\") " pod="calico-apiserver/calico-apiserver-5d99b78498-wqsg9" Jan 17 12:23:18.085884 kubelet[2717]: I0117 12:23:18.085795 2717 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/ff552645-5250-458a-ac60-d14e1b9bee85-calico-apiserver-certs\") pod \"calico-apiserver-5d99b78498-h976w\" (UID: \"ff552645-5250-458a-ac60-d14e1b9bee85\") " pod="calico-apiserver/calico-apiserver-5d99b78498-h976w" Jan 17 12:23:18.085884 kubelet[2717]: I0117 12:23:18.085846 2717 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/85f28c23-7266-45b9-ab87-b4179415ea7d-calico-apiserver-certs\") pod \"calico-apiserver-5d99b78498-wqsg9\" (UID: \"85f28c23-7266-45b9-ab87-b4179415ea7d\") " pod="calico-apiserver/calico-apiserver-5d99b78498-wqsg9" Jan 17 12:23:18.085938 kubelet[2717]: I0117 12:23:18.085890 2717 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kfr2k\" (UniqueName: \"kubernetes.io/projected/cf7e29e6-980f-42a1-85a4-fdb746002f5f-kube-api-access-kfr2k\") pod \"coredns-76f75df574-kf6mr\" (UID: \"cf7e29e6-980f-42a1-85a4-fdb746002f5f\") " pod="kube-system/coredns-76f75df574-kf6mr" Jan 17 12:23:18.085938 kubelet[2717]: I0117 12:23:18.085923 2717 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/c85f2c0e-97a2-4021-ae15-4f96370ba9bd-config-volume\") pod \"coredns-76f75df574-rvkqr\" (UID: \"c85f2c0e-97a2-4021-ae15-4f96370ba9bd\") " pod="kube-system/coredns-76f75df574-rvkqr" Jan 17 12:23:18.085986 kubelet[2717]: I0117 12:23:18.085952 2717 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cf7e29e6-980f-42a1-85a4-fdb746002f5f-config-volume\") pod \"coredns-76f75df574-kf6mr\" (UID: \"cf7e29e6-980f-42a1-85a4-fdb746002f5f\") " pod="kube-system/coredns-76f75df574-kf6mr" Jan 17 12:23:18.085986 kubelet[2717]: I0117 12:23:18.085974 2717 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-27wjv\" (UniqueName: \"kubernetes.io/projected/ff552645-5250-458a-ac60-d14e1b9bee85-kube-api-access-27wjv\") pod \"calico-apiserver-5d99b78498-h976w\" (UID: \"ff552645-5250-458a-ac60-d14e1b9bee85\") " pod="calico-apiserver/calico-apiserver-5d99b78498-h976w" Jan 17 12:23:18.086082 kubelet[2717]: I0117 12:23:18.086053 2717 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2wvpr\" (UniqueName: \"kubernetes.io/projected/c85f2c0e-97a2-4021-ae15-4f96370ba9bd-kube-api-access-2wvpr\") pod \"coredns-76f75df574-rvkqr\" (UID: \"c85f2c0e-97a2-4021-ae15-4f96370ba9bd\") " pod="kube-system/coredns-76f75df574-rvkqr" Jan 17 12:23:18.086115 kubelet[2717]: I0117 12:23:18.086097 2717 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e256e234-13b1-4fd7-b50f-50db194b2888-tigera-ca-bundle\") pod \"calico-kube-controllers-9bdf448d6-lzvz2\" (UID: \"e256e234-13b1-4fd7-b50f-50db194b2888\") " pod="calico-system/calico-kube-controllers-9bdf448d6-lzvz2" Jan 17 12:23:18.086151 kubelet[2717]: I0117 12:23:18.086122 2717 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66fn9\" (UniqueName: \"kubernetes.io/projected/e256e234-13b1-4fd7-b50f-50db194b2888-kube-api-access-66fn9\") pod \"calico-kube-controllers-9bdf448d6-lzvz2\" (UID: \"e256e234-13b1-4fd7-b50f-50db194b2888\") " pod="calico-system/calico-kube-controllers-9bdf448d6-lzvz2" Jan 17 12:23:18.089863 containerd[1548]: time="2025-01-17T12:23:18.089799320Z" level=info msg="shim disconnected" id=63bbe1058d9165a1bfc28d453666fb566270883635df302e8796b892f44f86b9 namespace=k8s.io Jan 17 12:23:18.089863 containerd[1548]: time="2025-01-17T12:23:18.089861879Z" level=warning msg="cleaning up after shim disconnected" id=63bbe1058d9165a1bfc28d453666fb566270883635df302e8796b892f44f86b9 namespace=k8s.io Jan 17 12:23:18.089988 containerd[1548]: time="2025-01-17T12:23:18.089869959Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 12:23:18.391299 kubelet[2717]: E0117 12:23:18.391266 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:23:18.392554 containerd[1548]: time="2025-01-17T12:23:18.392324634Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-rvkqr,Uid:c85f2c0e-97a2-4021-ae15-4f96370ba9bd,Namespace:kube-system,Attempt:0,}" Jan 17 12:23:18.393201 kubelet[2717]: E0117 12:23:18.392406 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:23:18.393243 containerd[1548]: time="2025-01-17T12:23:18.392776624Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-kf6mr,Uid:cf7e29e6-980f-42a1-85a4-fdb746002f5f,Namespace:kube-system,Attempt:0,}" Jan 17 12:23:18.393725 containerd[1548]: time="2025-01-17T12:23:18.393678724Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d99b78498-wqsg9,Uid:85f28c23-7266-45b9-ab87-b4179415ea7d,Namespace:calico-apiserver,Attempt:0,}" Jan 17 12:23:18.395151 containerd[1548]: time="2025-01-17T12:23:18.395120612Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d99b78498-h976w,Uid:ff552645-5250-458a-ac60-d14e1b9bee85,Namespace:calico-apiserver,Attempt:0,}" Jan 17 12:23:18.397863 containerd[1548]: time="2025-01-17T12:23:18.397831553Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-9bdf448d6-lzvz2,Uid:e256e234-13b1-4fd7-b50f-50db194b2888,Namespace:calico-system,Attempt:0,}" Jan 17 12:23:18.488154 containerd[1548]: time="2025-01-17T12:23:18.488095250Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-d8gl2,Uid:0f7f2ce6-11ce-4b25-b85f-2f1455b73126,Namespace:calico-system,Attempt:0,}" Jan 17 12:23:18.589879 kubelet[2717]: E0117 12:23:18.587784 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:23:18.590386 containerd[1548]: time="2025-01-17T12:23:18.590310204Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Jan 17 12:23:18.760904 containerd[1548]: time="2025-01-17T12:23:18.760776019Z" level=error msg="Failed to destroy network for sandbox \"44a987f7b6aab41fa1c4bfe7b24bc44295e6e35bb8ea5cc5d144441a34448bc1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:23:18.761500 containerd[1548]: time="2025-01-17T12:23:18.761450324Z" level=error msg="encountered an error cleaning up failed sandbox \"44a987f7b6aab41fa1c4bfe7b24bc44295e6e35bb8ea5cc5d144441a34448bc1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:23:18.761571 containerd[1548]: time="2025-01-17T12:23:18.761528802Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-9bdf448d6-lzvz2,Uid:e256e234-13b1-4fd7-b50f-50db194b2888,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"44a987f7b6aab41fa1c4bfe7b24bc44295e6e35bb8ea5cc5d144441a34448bc1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:23:18.764185 containerd[1548]: time="2025-01-17T12:23:18.764133225Z" level=error msg="Failed to destroy network for sandbox \"85475dae281093367a54e3d7e5299dd1c0c2c0881e9181a9208c08a128867632\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:23:18.764763 kubelet[2717]: E0117 12:23:18.764710 2717 remote_runtime.go:193] 
"RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"44a987f7b6aab41fa1c4bfe7b24bc44295e6e35bb8ea5cc5d144441a34448bc1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:23:18.764836 kubelet[2717]: E0117 12:23:18.764799 2717 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"44a987f7b6aab41fa1c4bfe7b24bc44295e6e35bb8ea5cc5d144441a34448bc1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-9bdf448d6-lzvz2" Jan 17 12:23:18.764836 kubelet[2717]: E0117 12:23:18.764821 2717 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"44a987f7b6aab41fa1c4bfe7b24bc44295e6e35bb8ea5cc5d144441a34448bc1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-9bdf448d6-lzvz2" Jan 17 12:23:18.764926 kubelet[2717]: E0117 12:23:18.764912 2717 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-9bdf448d6-lzvz2_calico-system(e256e234-13b1-4fd7-b50f-50db194b2888)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-9bdf448d6-lzvz2_calico-system(e256e234-13b1-4fd7-b50f-50db194b2888)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"44a987f7b6aab41fa1c4bfe7b24bc44295e6e35bb8ea5cc5d144441a34448bc1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-9bdf448d6-lzvz2" podUID="e256e234-13b1-4fd7-b50f-50db194b2888" Jan 17 12:23:18.765252 containerd[1548]: time="2025-01-17T12:23:18.765219641Z" level=error msg="encountered an error cleaning up failed sandbox \"85475dae281093367a54e3d7e5299dd1c0c2c0881e9181a9208c08a128867632\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:23:18.765302 containerd[1548]: time="2025-01-17T12:23:18.765277760Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-rvkqr,Uid:c85f2c0e-97a2-4021-ae15-4f96370ba9bd,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"85475dae281093367a54e3d7e5299dd1c0c2c0881e9181a9208c08a128867632\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:23:18.765484 kubelet[2717]: E0117 12:23:18.765466 2717 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"85475dae281093367a54e3d7e5299dd1c0c2c0881e9181a9208c08a128867632\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:23:18.765545 kubelet[2717]: E0117 12:23:18.765502 2717 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"85475dae281093367a54e3d7e5299dd1c0c2c0881e9181a9208c08a128867632\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-rvkqr" Jan 17 12:23:18.765545 kubelet[2717]: E0117 12:23:18.765520 2717 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"85475dae281093367a54e3d7e5299dd1c0c2c0881e9181a9208c08a128867632\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-rvkqr" Jan 17 12:23:18.765603 kubelet[2717]: E0117 12:23:18.765561 2717 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-rvkqr_kube-system(c85f2c0e-97a2-4021-ae15-4f96370ba9bd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-rvkqr_kube-system(c85f2c0e-97a2-4021-ae15-4f96370ba9bd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"85475dae281093367a54e3d7e5299dd1c0c2c0881e9181a9208c08a128867632\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-rvkqr" podUID="c85f2c0e-97a2-4021-ae15-4f96370ba9bd" Jan 17 12:23:18.765935 containerd[1548]: time="2025-01-17T12:23:18.765909306Z" level=error msg="Failed to destroy network for sandbox \"7c6f7aebe8e47fdebf5e4226c93fd1dba6a8861d79bda43cde364e68ce4b5570\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:23:18.766612 containerd[1548]: time="2025-01-17T12:23:18.766578971Z" level=error msg="encountered an error cleaning up failed sandbox \"7c6f7aebe8e47fdebf5e4226c93fd1dba6a8861d79bda43cde364e68ce4b5570\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:23:18.766681 containerd[1548]: time="2025-01-17T12:23:18.766661370Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d99b78498-wqsg9,Uid:85f28c23-7266-45b9-ab87-b4179415ea7d,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7c6f7aebe8e47fdebf5e4226c93fd1dba6a8861d79bda43cde364e68ce4b5570\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:23:18.766824 kubelet[2717]: E0117 12:23:18.766802 2717 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"7c6f7aebe8e47fdebf5e4226c93fd1dba6a8861d79bda43cde364e68ce4b5570\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:23:18.766878 kubelet[2717]: E0117 12:23:18.766841 2717 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7c6f7aebe8e47fdebf5e4226c93fd1dba6a8861d79bda43cde364e68ce4b5570\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5d99b78498-wqsg9" Jan 17 12:23:18.766878 kubelet[2717]: E0117 12:23:18.766858 2717 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7c6f7aebe8e47fdebf5e4226c93fd1dba6a8861d79bda43cde364e68ce4b5570\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5d99b78498-wqsg9" Jan 17 12:23:18.766878 kubelet[2717]: E0117 12:23:18.766895 2717 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5d99b78498-wqsg9_calico-apiserver(85f28c23-7266-45b9-ab87-b4179415ea7d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5d99b78498-wqsg9_calico-apiserver(85f28c23-7266-45b9-ab87-b4179415ea7d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7c6f7aebe8e47fdebf5e4226c93fd1dba6a8861d79bda43cde364e68ce4b5570\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5d99b78498-wqsg9" podUID="85f28c23-7266-45b9-ab87-b4179415ea7d" Jan 17 12:23:18.767646 containerd[1548]: time="2025-01-17T12:23:18.767608909Z" level=error msg="Failed to destroy network for sandbox \"ab7b3801b62d55de0573ab8496d50c8ec4ea027fba01f5e43c10d8963a1a0944\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:23:18.768022 containerd[1548]: time="2025-01-17T12:23:18.767915022Z" level=error msg="encountered an error cleaning up failed sandbox \"ab7b3801b62d55de0573ab8496d50c8ec4ea027fba01f5e43c10d8963a1a0944\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:23:18.768022 containerd[1548]: time="2025-01-17T12:23:18.767967021Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-kf6mr,Uid:cf7e29e6-980f-42a1-85a4-fdb746002f5f,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ab7b3801b62d55de0573ab8496d50c8ec4ea027fba01f5e43c10d8963a1a0944\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:23:18.768506 kubelet[2717]: E0117 12:23:18.768372 2717 
remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ab7b3801b62d55de0573ab8496d50c8ec4ea027fba01f5e43c10d8963a1a0944\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:23:18.768506 kubelet[2717]: E0117 12:23:18.768413 2717 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ab7b3801b62d55de0573ab8496d50c8ec4ea027fba01f5e43c10d8963a1a0944\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-kf6mr" Jan 17 12:23:18.768506 kubelet[2717]: E0117 12:23:18.768430 2717 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ab7b3801b62d55de0573ab8496d50c8ec4ea027fba01f5e43c10d8963a1a0944\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-kf6mr" Jan 17 12:23:18.768741 kubelet[2717]: E0117 12:23:18.768467 2717 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-kf6mr_kube-system(cf7e29e6-980f-42a1-85a4-fdb746002f5f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-kf6mr_kube-system(cf7e29e6-980f-42a1-85a4-fdb746002f5f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ab7b3801b62d55de0573ab8496d50c8ec4ea027fba01f5e43c10d8963a1a0944\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-kf6mr" podUID="cf7e29e6-980f-42a1-85a4-fdb746002f5f" Jan 17 12:23:18.772721 containerd[1548]: time="2025-01-17T12:23:18.772476922Z" level=error msg="Failed to destroy network for sandbox \"0040d7daf9b507315ecaeceaa5b2b32a80b5a5313f067b67e5272fbdc59b3a93\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:23:18.773092 containerd[1548]: time="2025-01-17T12:23:18.772987311Z" level=error msg="encountered an error cleaning up failed sandbox \"0040d7daf9b507315ecaeceaa5b2b32a80b5a5313f067b67e5272fbdc59b3a93\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:23:18.773132 containerd[1548]: time="2025-01-17T12:23:18.773103348Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d99b78498-h976w,Uid:ff552645-5250-458a-ac60-d14e1b9bee85,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0040d7daf9b507315ecaeceaa5b2b32a80b5a5313f067b67e5272fbdc59b3a93\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Jan 17 12:23:18.773521 kubelet[2717]: E0117 12:23:18.773483 2717 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0040d7daf9b507315ecaeceaa5b2b32a80b5a5313f067b67e5272fbdc59b3a93\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:23:18.773578 kubelet[2717]: E0117 12:23:18.773549 2717 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0040d7daf9b507315ecaeceaa5b2b32a80b5a5313f067b67e5272fbdc59b3a93\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5d99b78498-h976w" Jan 17 12:23:18.773578 kubelet[2717]: E0117 12:23:18.773570 2717 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0040d7daf9b507315ecaeceaa5b2b32a80b5a5313f067b67e5272fbdc59b3a93\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5d99b78498-h976w" Jan 17 12:23:18.773695 kubelet[2717]: E0117 12:23:18.773612 2717 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5d99b78498-h976w_calico-apiserver(ff552645-5250-458a-ac60-d14e1b9bee85)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5d99b78498-h976w_calico-apiserver(ff552645-5250-458a-ac60-d14e1b9bee85)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0040d7daf9b507315ecaeceaa5b2b32a80b5a5313f067b67e5272fbdc59b3a93\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5d99b78498-h976w" podUID="ff552645-5250-458a-ac60-d14e1b9bee85" Jan 17 12:23:18.778805 containerd[1548]: time="2025-01-17T12:23:18.778761744Z" level=error msg="Failed to destroy network for sandbox \"0c49679fb98a69a6cbebd3c05241103a03b39e1bca53c0a43dd8524eb750c10e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:23:18.779229 containerd[1548]: time="2025-01-17T12:23:18.779198734Z" level=error msg="encountered an error cleaning up failed sandbox \"0c49679fb98a69a6cbebd3c05241103a03b39e1bca53c0a43dd8524eb750c10e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:23:18.779283 containerd[1548]: time="2025-01-17T12:23:18.779262213Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-d8gl2,Uid:0f7f2ce6-11ce-4b25-b85f-2f1455b73126,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0c49679fb98a69a6cbebd3c05241103a03b39e1bca53c0a43dd8524eb750c10e\": plugin type=\"calico\" failed 
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:23:18.779454 kubelet[2717]: E0117 12:23:18.779434 2717 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0c49679fb98a69a6cbebd3c05241103a03b39e1bca53c0a43dd8524eb750c10e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:23:18.779495 kubelet[2717]: E0117 12:23:18.779477 2717 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0c49679fb98a69a6cbebd3c05241103a03b39e1bca53c0a43dd8524eb750c10e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-d8gl2" Jan 17 12:23:18.779534 kubelet[2717]: E0117 12:23:18.779495 2717 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0c49679fb98a69a6cbebd3c05241103a03b39e1bca53c0a43dd8524eb750c10e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-d8gl2" Jan 17 12:23:18.779562 kubelet[2717]: E0117 12:23:18.779542 2717 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-d8gl2_calico-system(0f7f2ce6-11ce-4b25-b85f-2f1455b73126)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-d8gl2_calico-system(0f7f2ce6-11ce-4b25-b85f-2f1455b73126)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0c49679fb98a69a6cbebd3c05241103a03b39e1bca53c0a43dd8524eb750c10e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-d8gl2" podUID="0f7f2ce6-11ce-4b25-b85f-2f1455b73126" Jan 17 12:23:19.404536 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0040d7daf9b507315ecaeceaa5b2b32a80b5a5313f067b67e5272fbdc59b3a93-shm.mount: Deactivated successfully. Jan 17 12:23:19.404699 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7c6f7aebe8e47fdebf5e4226c93fd1dba6a8861d79bda43cde364e68ce4b5570-shm.mount: Deactivated successfully. Jan 17 12:23:19.404786 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-85475dae281093367a54e3d7e5299dd1c0c2c0881e9181a9208c08a128867632-shm.mount: Deactivated successfully. Jan 17 12:23:19.404862 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ab7b3801b62d55de0573ab8496d50c8ec4ea027fba01f5e43c10d8963a1a0944-shm.mount: Deactivated successfully. 
Jan 17 12:23:19.591525 kubelet[2717]: I0117 12:23:19.591497 2717 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7c6f7aebe8e47fdebf5e4226c93fd1dba6a8861d79bda43cde364e68ce4b5570" Jan 17 12:23:19.592821 containerd[1548]: time="2025-01-17T12:23:19.592288763Z" level=info msg="StopPodSandbox for \"7c6f7aebe8e47fdebf5e4226c93fd1dba6a8861d79bda43cde364e68ce4b5570\"" Jan 17 12:23:19.592821 containerd[1548]: time="2025-01-17T12:23:19.592470519Z" level=info msg="Ensure that sandbox 7c6f7aebe8e47fdebf5e4226c93fd1dba6a8861d79bda43cde364e68ce4b5570 in task-service has been cleanup successfully" Jan 17 12:23:19.594499 kubelet[2717]: I0117 12:23:19.594472 2717 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0c49679fb98a69a6cbebd3c05241103a03b39e1bca53c0a43dd8524eb750c10e" Jan 17 12:23:19.595838 containerd[1548]: time="2025-01-17T12:23:19.595772291Z" level=info msg="StopPodSandbox for \"0c49679fb98a69a6cbebd3c05241103a03b39e1bca53c0a43dd8524eb750c10e\"" Jan 17 12:23:19.596528 containerd[1548]: time="2025-01-17T12:23:19.596336359Z" level=info msg="Ensure that sandbox 0c49679fb98a69a6cbebd3c05241103a03b39e1bca53c0a43dd8524eb750c10e in task-service has been cleanup successfully" Jan 17 12:23:19.596994 kubelet[2717]: I0117 12:23:19.596964 2717 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0040d7daf9b507315ecaeceaa5b2b32a80b5a5313f067b67e5272fbdc59b3a93" Jan 17 12:23:19.598153 containerd[1548]: time="2025-01-17T12:23:19.598025204Z" level=info msg="StopPodSandbox for \"0040d7daf9b507315ecaeceaa5b2b32a80b5a5313f067b67e5272fbdc59b3a93\"" Jan 17 12:23:19.598337 containerd[1548]: time="2025-01-17T12:23:19.598309639Z" level=info msg="Ensure that sandbox 0040d7daf9b507315ecaeceaa5b2b32a80b5a5313f067b67e5272fbdc59b3a93 in task-service has been cleanup successfully" Jan 17 12:23:19.599902 kubelet[2717]: I0117 12:23:19.599259 2717 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ab7b3801b62d55de0573ab8496d50c8ec4ea027fba01f5e43c10d8963a1a0944" Jan 17 12:23:19.601485 containerd[1548]: time="2025-01-17T12:23:19.600794827Z" level=info msg="StopPodSandbox for \"ab7b3801b62d55de0573ab8496d50c8ec4ea027fba01f5e43c10d8963a1a0944\"" Jan 17 12:23:19.601485 containerd[1548]: time="2025-01-17T12:23:19.601035902Z" level=info msg="Ensure that sandbox ab7b3801b62d55de0573ab8496d50c8ec4ea027fba01f5e43c10d8963a1a0944 in task-service has been cleanup successfully" Jan 17 12:23:19.602230 kubelet[2717]: I0117 12:23:19.602186 2717 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="85475dae281093367a54e3d7e5299dd1c0c2c0881e9181a9208c08a128867632" Jan 17 12:23:19.603547 containerd[1548]: time="2025-01-17T12:23:19.603506172Z" level=info msg="StopPodSandbox for \"85475dae281093367a54e3d7e5299dd1c0c2c0881e9181a9208c08a128867632\"" Jan 17 12:23:19.603701 containerd[1548]: time="2025-01-17T12:23:19.603677488Z" level=info msg="Ensure that sandbox 85475dae281093367a54e3d7e5299dd1c0c2c0881e9181a9208c08a128867632 in task-service has been cleanup successfully" Jan 17 12:23:19.605256 kubelet[2717]: I0117 12:23:19.605233 2717 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="44a987f7b6aab41fa1c4bfe7b24bc44295e6e35bb8ea5cc5d144441a34448bc1" Jan 17 12:23:19.605767 containerd[1548]: time="2025-01-17T12:23:19.605738966Z" level=info msg="StopPodSandbox for \"44a987f7b6aab41fa1c4bfe7b24bc44295e6e35bb8ea5cc5d144441a34448bc1\"" Jan 17 12:23:19.605907 
containerd[1548]: time="2025-01-17T12:23:19.605888962Z" level=info msg="Ensure that sandbox 44a987f7b6aab41fa1c4bfe7b24bc44295e6e35bb8ea5cc5d144441a34448bc1 in task-service has been cleanup successfully" Jan 17 12:23:19.648664 containerd[1548]: time="2025-01-17T12:23:19.648589563Z" level=error msg="StopPodSandbox for \"0c49679fb98a69a6cbebd3c05241103a03b39e1bca53c0a43dd8524eb750c10e\" failed" error="failed to destroy network for sandbox \"0c49679fb98a69a6cbebd3c05241103a03b39e1bca53c0a43dd8524eb750c10e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:23:19.658669 containerd[1548]: time="2025-01-17T12:23:19.658302603Z" level=error msg="StopPodSandbox for \"85475dae281093367a54e3d7e5299dd1c0c2c0881e9181a9208c08a128867632\" failed" error="failed to destroy network for sandbox \"85475dae281093367a54e3d7e5299dd1c0c2c0881e9181a9208c08a128867632\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:23:19.706552 kubelet[2717]: E0117 12:23:19.706296 2717 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0c49679fb98a69a6cbebd3c05241103a03b39e1bca53c0a43dd8524eb750c10e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0c49679fb98a69a6cbebd3c05241103a03b39e1bca53c0a43dd8524eb750c10e" Jan 17 12:23:19.706552 kubelet[2717]: E0117 12:23:19.706455 2717 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"0c49679fb98a69a6cbebd3c05241103a03b39e1bca53c0a43dd8524eb750c10e"} Jan 17 12:23:19.706552 kubelet[2717]: E0117 12:23:19.706494 2717 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"0f7f2ce6-11ce-4b25-b85f-2f1455b73126\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0c49679fb98a69a6cbebd3c05241103a03b39e1bca53c0a43dd8524eb750c10e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 12:23:19.706552 kubelet[2717]: E0117 12:23:19.706525 2717 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"0f7f2ce6-11ce-4b25-b85f-2f1455b73126\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0c49679fb98a69a6cbebd3c05241103a03b39e1bca53c0a43dd8524eb750c10e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-d8gl2" podUID="0f7f2ce6-11ce-4b25-b85f-2f1455b73126" Jan 17 12:23:19.707216 kubelet[2717]: E0117 12:23:19.706940 2717 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"85475dae281093367a54e3d7e5299dd1c0c2c0881e9181a9208c08a128867632\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" podSandboxID="85475dae281093367a54e3d7e5299dd1c0c2c0881e9181a9208c08a128867632" Jan 17 12:23:19.707216 kubelet[2717]: E0117 12:23:19.706977 2717 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"85475dae281093367a54e3d7e5299dd1c0c2c0881e9181a9208c08a128867632"} Jan 17 12:23:19.707216 kubelet[2717]: E0117 12:23:19.707110 2717 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c85f2c0e-97a2-4021-ae15-4f96370ba9bd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"85475dae281093367a54e3d7e5299dd1c0c2c0881e9181a9208c08a128867632\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 12:23:19.707216 kubelet[2717]: E0117 12:23:19.707160 2717 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c85f2c0e-97a2-4021-ae15-4f96370ba9bd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"85475dae281093367a54e3d7e5299dd1c0c2c0881e9181a9208c08a128867632\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-rvkqr" podUID="c85f2c0e-97a2-4021-ae15-4f96370ba9bd" Jan 17 12:23:19.708888 containerd[1548]: time="2025-01-17T12:23:19.708763004Z" level=error msg="StopPodSandbox for \"ab7b3801b62d55de0573ab8496d50c8ec4ea027fba01f5e43c10d8963a1a0944\" failed" error="failed to destroy network for sandbox \"ab7b3801b62d55de0573ab8496d50c8ec4ea027fba01f5e43c10d8963a1a0944\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:23:19.710518 kubelet[2717]: E0117 12:23:19.710492 2717 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ab7b3801b62d55de0573ab8496d50c8ec4ea027fba01f5e43c10d8963a1a0944\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ab7b3801b62d55de0573ab8496d50c8ec4ea027fba01f5e43c10d8963a1a0944" Jan 17 12:23:19.710590 kubelet[2717]: E0117 12:23:19.710525 2717 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ab7b3801b62d55de0573ab8496d50c8ec4ea027fba01f5e43c10d8963a1a0944"} Jan 17 12:23:19.710590 kubelet[2717]: E0117 12:23:19.710555 2717 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"cf7e29e6-980f-42a1-85a4-fdb746002f5f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ab7b3801b62d55de0573ab8496d50c8ec4ea027fba01f5e43c10d8963a1a0944\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 12:23:19.710590 kubelet[2717]: E0117 12:23:19.710580 2717 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for 
\"cf7e29e6-980f-42a1-85a4-fdb746002f5f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ab7b3801b62d55de0573ab8496d50c8ec4ea027fba01f5e43c10d8963a1a0944\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-kf6mr" podUID="cf7e29e6-980f-42a1-85a4-fdb746002f5f" Jan 17 12:23:19.713383 containerd[1548]: time="2025-01-17T12:23:19.713275471Z" level=error msg="StopPodSandbox for \"0040d7daf9b507315ecaeceaa5b2b32a80b5a5313f067b67e5272fbdc59b3a93\" failed" error="failed to destroy network for sandbox \"0040d7daf9b507315ecaeceaa5b2b32a80b5a5313f067b67e5272fbdc59b3a93\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:23:19.717632 kubelet[2717]: E0117 12:23:19.717488 2717 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0040d7daf9b507315ecaeceaa5b2b32a80b5a5313f067b67e5272fbdc59b3a93\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0040d7daf9b507315ecaeceaa5b2b32a80b5a5313f067b67e5272fbdc59b3a93" Jan 17 12:23:19.717632 kubelet[2717]: E0117 12:23:19.717524 2717 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"0040d7daf9b507315ecaeceaa5b2b32a80b5a5313f067b67e5272fbdc59b3a93"} Jan 17 12:23:19.717632 kubelet[2717]: E0117 12:23:19.717555 2717 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ff552645-5250-458a-ac60-d14e1b9bee85\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0040d7daf9b507315ecaeceaa5b2b32a80b5a5313f067b67e5272fbdc59b3a93\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 12:23:19.717632 kubelet[2717]: E0117 12:23:19.717590 2717 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ff552645-5250-458a-ac60-d14e1b9bee85\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0040d7daf9b507315ecaeceaa5b2b32a80b5a5313f067b67e5272fbdc59b3a93\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5d99b78498-h976w" podUID="ff552645-5250-458a-ac60-d14e1b9bee85" Jan 17 12:23:19.717826 containerd[1548]: time="2025-01-17T12:23:19.717482064Z" level=error msg="StopPodSandbox for \"44a987f7b6aab41fa1c4bfe7b24bc44295e6e35bb8ea5cc5d144441a34448bc1\" failed" error="failed to destroy network for sandbox \"44a987f7b6aab41fa1c4bfe7b24bc44295e6e35bb8ea5cc5d144441a34448bc1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:23:19.717866 kubelet[2717]: E0117 12:23:19.717674 2717 remote_runtime.go:222] "StopPodSandbox 
from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"44a987f7b6aab41fa1c4bfe7b24bc44295e6e35bb8ea5cc5d144441a34448bc1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="44a987f7b6aab41fa1c4bfe7b24bc44295e6e35bb8ea5cc5d144441a34448bc1" Jan 17 12:23:19.717866 kubelet[2717]: E0117 12:23:19.717713 2717 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"44a987f7b6aab41fa1c4bfe7b24bc44295e6e35bb8ea5cc5d144441a34448bc1"} Jan 17 12:23:19.717866 kubelet[2717]: E0117 12:23:19.717744 2717 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e256e234-13b1-4fd7-b50f-50db194b2888\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"44a987f7b6aab41fa1c4bfe7b24bc44295e6e35bb8ea5cc5d144441a34448bc1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 12:23:19.717866 kubelet[2717]: E0117 12:23:19.717768 2717 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e256e234-13b1-4fd7-b50f-50db194b2888\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"44a987f7b6aab41fa1c4bfe7b24bc44295e6e35bb8ea5cc5d144441a34448bc1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-9bdf448d6-lzvz2" podUID="e256e234-13b1-4fd7-b50f-50db194b2888" Jan 17 12:23:19.718089 containerd[1548]: time="2025-01-17T12:23:19.718050372Z" level=error msg="StopPodSandbox for \"7c6f7aebe8e47fdebf5e4226c93fd1dba6a8861d79bda43cde364e68ce4b5570\" failed" error="failed to destroy network for sandbox \"7c6f7aebe8e47fdebf5e4226c93fd1dba6a8861d79bda43cde364e68ce4b5570\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:23:19.718395 kubelet[2717]: E0117 12:23:19.718242 2717 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7c6f7aebe8e47fdebf5e4226c93fd1dba6a8861d79bda43cde364e68ce4b5570\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7c6f7aebe8e47fdebf5e4226c93fd1dba6a8861d79bda43cde364e68ce4b5570" Jan 17 12:23:19.718395 kubelet[2717]: E0117 12:23:19.718271 2717 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"7c6f7aebe8e47fdebf5e4226c93fd1dba6a8861d79bda43cde364e68ce4b5570"} Jan 17 12:23:19.718395 kubelet[2717]: E0117 12:23:19.718299 2717 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"85f28c23-7266-45b9-ab87-b4179415ea7d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7c6f7aebe8e47fdebf5e4226c93fd1dba6a8861d79bda43cde364e68ce4b5570\\\": plugin type=\\\"calico\\\" failed 
(delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 12:23:19.718395 kubelet[2717]: E0117 12:23:19.718335 2717 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"85f28c23-7266-45b9-ab87-b4179415ea7d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7c6f7aebe8e47fdebf5e4226c93fd1dba6a8861d79bda43cde364e68ce4b5570\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5d99b78498-wqsg9" podUID="85f28c23-7266-45b9-ab87-b4179415ea7d" Jan 17 12:23:21.441967 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1816994198.mount: Deactivated successfully. Jan 17 12:23:21.691578 containerd[1548]: time="2025-01-17T12:23:21.691512323Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=137671762" Jan 17 12:23:21.705708 containerd[1548]: time="2025-01-17T12:23:21.705566504Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:23:21.708016 containerd[1548]: time="2025-01-17T12:23:21.707984100Z" level=info msg="ImageCreate event name:\"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:23:21.708659 containerd[1548]: time="2025-01-17T12:23:21.708635688Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:23:21.709437 containerd[1548]: time="2025-01-17T12:23:21.709404673Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"137671624\" in 3.119030271s" Jan 17 12:23:21.709531 containerd[1548]: time="2025-01-17T12:23:21.709437073Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\"" Jan 17 12:23:21.724038 containerd[1548]: time="2025-01-17T12:23:21.723999484Z" level=info msg="CreateContainer within sandbox \"5fab4da34dbaf7c5130977784a45b784b169f00927fc59c7a87cc684898db2ac\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 17 12:23:21.737921 containerd[1548]: time="2025-01-17T12:23:21.737240720Z" level=info msg="CreateContainer within sandbox \"5fab4da34dbaf7c5130977784a45b784b169f00927fc59c7a87cc684898db2ac\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"9404e24969c84ad98a77f5041dce48ae9ff2ac75058373078c8d6f1ce0747e05\"" Jan 17 12:23:21.738235 containerd[1548]: time="2025-01-17T12:23:21.738203622Z" level=info msg="StartContainer for \"9404e24969c84ad98a77f5041dce48ae9ff2ac75058373078c8d6f1ce0747e05\"" Jan 17 12:23:21.846860 containerd[1548]: time="2025-01-17T12:23:21.846811179Z" level=info msg="StartContainer for \"9404e24969c84ad98a77f5041dce48ae9ff2ac75058373078c8d6f1ce0747e05\" returns successfully" 
Jan 17 12:23:22.132170 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 17 12:23:22.132299 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jan 17 12:23:22.632960 kubelet[2717]: E0117 12:23:22.632935 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:23:23.633002 kubelet[2717]: I0117 12:23:23.632959 2717 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 17 12:23:23.633882 kubelet[2717]: E0117 12:23:23.633849 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:23:26.165602 systemd[1]: Started sshd@7-10.0.0.132:22-10.0.0.1:42628.service - OpenSSH per-connection server daemon (10.0.0.1:42628). Jan 17 12:23:26.199162 sshd[4006]: Accepted publickey for core from 10.0.0.1 port 42628 ssh2: RSA SHA256:xsFjL0Ru499iNfhRyIcjP6wTIWZ5oE8f5Pm6hYv+KHo Jan 17 12:23:26.200406 sshd[4006]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:23:26.203816 systemd-logind[1520]: New session 8 of user core. Jan 17 12:23:26.217572 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 17 12:23:26.334324 sshd[4006]: pam_unix(sshd:session): session closed for user core Jan 17 12:23:26.337816 systemd[1]: sshd@7-10.0.0.132:22-10.0.0.1:42628.service: Deactivated successfully. Jan 17 12:23:26.339652 systemd-logind[1520]: Session 8 logged out. Waiting for processes to exit. Jan 17 12:23:26.339728 systemd[1]: session-8.scope: Deactivated successfully. Jan 17 12:23:26.341005 systemd-logind[1520]: Removed session 8. Jan 17 12:23:27.175925 kubelet[2717]: I0117 12:23:27.175872 2717 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 17 12:23:27.177066 kubelet[2717]: E0117 12:23:27.176501 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:23:27.197135 kubelet[2717]: I0117 12:23:27.196742 2717 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-448hz" podStartSLOduration=6.7463223469999996 podStartE2EDuration="15.196647105s" podCreationTimestamp="2025-01-17 12:23:12 +0000 UTC" firstStartedPulling="2025-01-17 12:23:13.259339991 +0000 UTC m=+24.872634790" lastFinishedPulling="2025-01-17 12:23:21.709664709 +0000 UTC m=+33.322959548" observedRunningTime="2025-01-17 12:23:22.645454625 +0000 UTC m=+34.258749464" watchObservedRunningTime="2025-01-17 12:23:27.196647105 +0000 UTC m=+38.809941984" Jan 17 12:23:27.544811 kubelet[2717]: I0117 12:23:27.544776 2717 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 17 12:23:27.545569 kubelet[2717]: E0117 12:23:27.545553 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:23:27.587969 systemd[1]: run-containerd-runc-k8s.io-9404e24969c84ad98a77f5041dce48ae9ff2ac75058373078c8d6f1ce0747e05-runc.8yDlxF.mount: Deactivated successfully. 
Jan 17 12:23:27.641685 kubelet[2717]: E0117 12:23:27.641632 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:23:27.719381 kernel: bpftool[4133]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 17 12:23:27.894264 systemd-networkd[1230]: vxlan.calico: Link UP Jan 17 12:23:27.894273 systemd-networkd[1230]: vxlan.calico: Gained carrier Jan 17 12:23:29.383761 systemd-networkd[1230]: vxlan.calico: Gained IPv6LL Jan 17 12:23:31.348637 systemd[1]: Started sshd@8-10.0.0.132:22-10.0.0.1:42636.service - OpenSSH per-connection server daemon (10.0.0.1:42636). Jan 17 12:23:31.383325 sshd[4232]: Accepted publickey for core from 10.0.0.1 port 42636 ssh2: RSA SHA256:xsFjL0Ru499iNfhRyIcjP6wTIWZ5oE8f5Pm6hYv+KHo Jan 17 12:23:31.384979 sshd[4232]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:23:31.388984 systemd-logind[1520]: New session 9 of user core. Jan 17 12:23:31.396651 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 17 12:23:31.473673 containerd[1548]: time="2025-01-17T12:23:31.473617336Z" level=info msg="StopPodSandbox for \"0040d7daf9b507315ecaeceaa5b2b32a80b5a5313f067b67e5272fbdc59b3a93\"" Jan 17 12:23:31.538491 sshd[4232]: pam_unix(sshd:session): session closed for user core Jan 17 12:23:31.542880 systemd[1]: sshd@8-10.0.0.132:22-10.0.0.1:42636.service: Deactivated successfully. Jan 17 12:23:31.543294 systemd-logind[1520]: Session 9 logged out. Waiting for processes to exit. Jan 17 12:23:31.546204 systemd[1]: session-9.scope: Deactivated successfully. Jan 17 12:23:31.548428 systemd-logind[1520]: Removed session 9. Jan 17 12:23:31.683787 containerd[1548]: 2025-01-17 12:23:31.568 [INFO][4261] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="0040d7daf9b507315ecaeceaa5b2b32a80b5a5313f067b67e5272fbdc59b3a93" Jan 17 12:23:31.683787 containerd[1548]: 2025-01-17 12:23:31.569 [INFO][4261] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="0040d7daf9b507315ecaeceaa5b2b32a80b5a5313f067b67e5272fbdc59b3a93" iface="eth0" netns="/var/run/netns/cni-c880ddd7-751a-812c-b1e9-fd4fe7af0188" Jan 17 12:23:31.683787 containerd[1548]: 2025-01-17 12:23:31.570 [INFO][4261] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="0040d7daf9b507315ecaeceaa5b2b32a80b5a5313f067b67e5272fbdc59b3a93" iface="eth0" netns="/var/run/netns/cni-c880ddd7-751a-812c-b1e9-fd4fe7af0188" Jan 17 12:23:31.683787 containerd[1548]: 2025-01-17 12:23:31.570 [INFO][4261] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="0040d7daf9b507315ecaeceaa5b2b32a80b5a5313f067b67e5272fbdc59b3a93" iface="eth0" netns="/var/run/netns/cni-c880ddd7-751a-812c-b1e9-fd4fe7af0188" Jan 17 12:23:31.683787 containerd[1548]: 2025-01-17 12:23:31.570 [INFO][4261] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="0040d7daf9b507315ecaeceaa5b2b32a80b5a5313f067b67e5272fbdc59b3a93" Jan 17 12:23:31.683787 containerd[1548]: 2025-01-17 12:23:31.573 [INFO][4261] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0040d7daf9b507315ecaeceaa5b2b32a80b5a5313f067b67e5272fbdc59b3a93" Jan 17 12:23:31.683787 containerd[1548]: 2025-01-17 12:23:31.658 [INFO][4272] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0040d7daf9b507315ecaeceaa5b2b32a80b5a5313f067b67e5272fbdc59b3a93" HandleID="k8s-pod-network.0040d7daf9b507315ecaeceaa5b2b32a80b5a5313f067b67e5272fbdc59b3a93" Workload="localhost-k8s-calico--apiserver--5d99b78498--h976w-eth0" Jan 17 12:23:31.683787 containerd[1548]: 2025-01-17 12:23:31.658 [INFO][4272] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:23:31.683787 containerd[1548]: 2025-01-17 12:23:31.658 [INFO][4272] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:23:31.683787 containerd[1548]: 2025-01-17 12:23:31.674 [WARNING][4272] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="0040d7daf9b507315ecaeceaa5b2b32a80b5a5313f067b67e5272fbdc59b3a93" HandleID="k8s-pod-network.0040d7daf9b507315ecaeceaa5b2b32a80b5a5313f067b67e5272fbdc59b3a93" Workload="localhost-k8s-calico--apiserver--5d99b78498--h976w-eth0" Jan 17 12:23:31.683787 containerd[1548]: 2025-01-17 12:23:31.674 [INFO][4272] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0040d7daf9b507315ecaeceaa5b2b32a80b5a5313f067b67e5272fbdc59b3a93" HandleID="k8s-pod-network.0040d7daf9b507315ecaeceaa5b2b32a80b5a5313f067b67e5272fbdc59b3a93" Workload="localhost-k8s-calico--apiserver--5d99b78498--h976w-eth0" Jan 17 12:23:31.683787 containerd[1548]: 2025-01-17 12:23:31.677 [INFO][4272] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:23:31.683787 containerd[1548]: 2025-01-17 12:23:31.679 [INFO][4261] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="0040d7daf9b507315ecaeceaa5b2b32a80b5a5313f067b67e5272fbdc59b3a93" Jan 17 12:23:31.683787 containerd[1548]: time="2025-01-17T12:23:31.681615034Z" level=info msg="TearDown network for sandbox \"0040d7daf9b507315ecaeceaa5b2b32a80b5a5313f067b67e5272fbdc59b3a93\" successfully" Jan 17 12:23:31.683787 containerd[1548]: time="2025-01-17T12:23:31.681639634Z" level=info msg="StopPodSandbox for \"0040d7daf9b507315ecaeceaa5b2b32a80b5a5313f067b67e5272fbdc59b3a93\" returns successfully" Jan 17 12:23:31.684041 systemd[1]: run-netns-cni\x2dc880ddd7\x2d751a\x2d812c\x2db1e9\x2dfd4fe7af0188.mount: Deactivated successfully. 
Jan 17 12:23:31.684867 containerd[1548]: time="2025-01-17T12:23:31.684822586Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d99b78498-h976w,Uid:ff552645-5250-458a-ac60-d14e1b9bee85,Namespace:calico-apiserver,Attempt:1,}" Jan 17 12:23:31.863842 systemd-networkd[1230]: cali332eb49c52e: Link UP Jan 17 12:23:31.864819 systemd-networkd[1230]: cali332eb49c52e: Gained carrier Jan 17 12:23:31.878498 containerd[1548]: 2025-01-17 12:23:31.765 [INFO][4281] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5d99b78498--h976w-eth0 calico-apiserver-5d99b78498- calico-apiserver ff552645-5250-458a-ac60-d14e1b9bee85 833 0 2025-01-17 12:23:12 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5d99b78498 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5d99b78498-h976w eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali332eb49c52e [] []}} ContainerID="5df787a6871dc0ce11233dbb2630bdb25f743f4ec821f2e55029c878668837ac" Namespace="calico-apiserver" Pod="calico-apiserver-5d99b78498-h976w" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d99b78498--h976w-" Jan 17 12:23:31.878498 containerd[1548]: 2025-01-17 12:23:31.765 [INFO][4281] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="5df787a6871dc0ce11233dbb2630bdb25f743f4ec821f2e55029c878668837ac" Namespace="calico-apiserver" Pod="calico-apiserver-5d99b78498-h976w" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d99b78498--h976w-eth0" Jan 17 12:23:31.878498 containerd[1548]: 2025-01-17 12:23:31.793 [INFO][4295] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5df787a6871dc0ce11233dbb2630bdb25f743f4ec821f2e55029c878668837ac" HandleID="k8s-pod-network.5df787a6871dc0ce11233dbb2630bdb25f743f4ec821f2e55029c878668837ac" Workload="localhost-k8s-calico--apiserver--5d99b78498--h976w-eth0" Jan 17 12:23:31.878498 containerd[1548]: 2025-01-17 12:23:31.811 [INFO][4295] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5df787a6871dc0ce11233dbb2630bdb25f743f4ec821f2e55029c878668837ac" HandleID="k8s-pod-network.5df787a6871dc0ce11233dbb2630bdb25f743f4ec821f2e55029c878668837ac" Workload="localhost-k8s-calico--apiserver--5d99b78498--h976w-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40005a34d0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-5d99b78498-h976w", "timestamp":"2025-01-17 12:23:31.793559323 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 12:23:31.878498 containerd[1548]: 2025-01-17 12:23:31.811 [INFO][4295] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:23:31.878498 containerd[1548]: 2025-01-17 12:23:31.811 [INFO][4295] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 17 12:23:31.878498 containerd[1548]: 2025-01-17 12:23:31.811 [INFO][4295] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 17 12:23:31.878498 containerd[1548]: 2025-01-17 12:23:31.814 [INFO][4295] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.5df787a6871dc0ce11233dbb2630bdb25f743f4ec821f2e55029c878668837ac" host="localhost" Jan 17 12:23:31.878498 containerd[1548]: 2025-01-17 12:23:31.836 [INFO][4295] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 17 12:23:31.878498 containerd[1548]: 2025-01-17 12:23:31.842 [INFO][4295] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 17 12:23:31.878498 containerd[1548]: 2025-01-17 12:23:31.844 [INFO][4295] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 17 12:23:31.878498 containerd[1548]: 2025-01-17 12:23:31.846 [INFO][4295] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 17 12:23:31.878498 containerd[1548]: 2025-01-17 12:23:31.846 [INFO][4295] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.5df787a6871dc0ce11233dbb2630bdb25f743f4ec821f2e55029c878668837ac" host="localhost" Jan 17 12:23:31.878498 containerd[1548]: 2025-01-17 12:23:31.848 [INFO][4295] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.5df787a6871dc0ce11233dbb2630bdb25f743f4ec821f2e55029c878668837ac Jan 17 12:23:31.878498 containerd[1548]: 2025-01-17 12:23:31.852 [INFO][4295] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.5df787a6871dc0ce11233dbb2630bdb25f743f4ec821f2e55029c878668837ac" host="localhost" Jan 17 12:23:31.878498 containerd[1548]: 2025-01-17 12:23:31.857 [INFO][4295] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.5df787a6871dc0ce11233dbb2630bdb25f743f4ec821f2e55029c878668837ac" host="localhost" Jan 17 12:23:31.878498 containerd[1548]: 2025-01-17 12:23:31.857 [INFO][4295] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.5df787a6871dc0ce11233dbb2630bdb25f743f4ec821f2e55029c878668837ac" host="localhost" Jan 17 12:23:31.878498 containerd[1548]: 2025-01-17 12:23:31.857 [INFO][4295] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 17 12:23:31.878498 containerd[1548]: 2025-01-17 12:23:31.857 [INFO][4295] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="5df787a6871dc0ce11233dbb2630bdb25f743f4ec821f2e55029c878668837ac" HandleID="k8s-pod-network.5df787a6871dc0ce11233dbb2630bdb25f743f4ec821f2e55029c878668837ac" Workload="localhost-k8s-calico--apiserver--5d99b78498--h976w-eth0" Jan 17 12:23:31.879587 containerd[1548]: 2025-01-17 12:23:31.859 [INFO][4281] cni-plugin/k8s.go 386: Populated endpoint ContainerID="5df787a6871dc0ce11233dbb2630bdb25f743f4ec821f2e55029c878668837ac" Namespace="calico-apiserver" Pod="calico-apiserver-5d99b78498-h976w" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d99b78498--h976w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5d99b78498--h976w-eth0", GenerateName:"calico-apiserver-5d99b78498-", Namespace:"calico-apiserver", SelfLink:"", UID:"ff552645-5250-458a-ac60-d14e1b9bee85", ResourceVersion:"833", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 23, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5d99b78498", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5d99b78498-h976w", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali332eb49c52e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:23:31.879587 containerd[1548]: 2025-01-17 12:23:31.859 [INFO][4281] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="5df787a6871dc0ce11233dbb2630bdb25f743f4ec821f2e55029c878668837ac" Namespace="calico-apiserver" Pod="calico-apiserver-5d99b78498-h976w" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d99b78498--h976w-eth0" Jan 17 12:23:31.879587 containerd[1548]: 2025-01-17 12:23:31.861 [INFO][4281] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali332eb49c52e ContainerID="5df787a6871dc0ce11233dbb2630bdb25f743f4ec821f2e55029c878668837ac" Namespace="calico-apiserver" Pod="calico-apiserver-5d99b78498-h976w" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d99b78498--h976w-eth0" Jan 17 12:23:31.879587 containerd[1548]: 2025-01-17 12:23:31.865 [INFO][4281] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5df787a6871dc0ce11233dbb2630bdb25f743f4ec821f2e55029c878668837ac" Namespace="calico-apiserver" Pod="calico-apiserver-5d99b78498-h976w" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d99b78498--h976w-eth0" Jan 17 12:23:31.879587 containerd[1548]: 2025-01-17 12:23:31.865 [INFO][4281] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="5df787a6871dc0ce11233dbb2630bdb25f743f4ec821f2e55029c878668837ac" Namespace="calico-apiserver" Pod="calico-apiserver-5d99b78498-h976w" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d99b78498--h976w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5d99b78498--h976w-eth0", GenerateName:"calico-apiserver-5d99b78498-", Namespace:"calico-apiserver", SelfLink:"", UID:"ff552645-5250-458a-ac60-d14e1b9bee85", ResourceVersion:"833", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 23, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5d99b78498", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5df787a6871dc0ce11233dbb2630bdb25f743f4ec821f2e55029c878668837ac", Pod:"calico-apiserver-5d99b78498-h976w", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali332eb49c52e", MAC:"92:84:64:e2:da:fe", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:23:31.879587 containerd[1548]: 2025-01-17 12:23:31.876 [INFO][4281] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="5df787a6871dc0ce11233dbb2630bdb25f743f4ec821f2e55029c878668837ac" Namespace="calico-apiserver" Pod="calico-apiserver-5d99b78498-h976w" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d99b78498--h976w-eth0" Jan 17 12:23:31.895224 containerd[1548]: time="2025-01-17T12:23:31.895064438Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:23:31.895463 containerd[1548]: time="2025-01-17T12:23:31.895198678Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:23:31.895699 containerd[1548]: time="2025-01-17T12:23:31.895583277Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:23:31.895828 containerd[1548]: time="2025-01-17T12:23:31.895784636Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:23:31.916756 systemd-resolved[1434]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 17 12:23:31.933392 containerd[1548]: time="2025-01-17T12:23:31.933335825Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d99b78498-h976w,Uid:ff552645-5250-458a-ac60-d14e1b9bee85,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"5df787a6871dc0ce11233dbb2630bdb25f743f4ec821f2e55029c878668837ac\"" Jan 17 12:23:31.935084 containerd[1548]: time="2025-01-17T12:23:31.934769822Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 17 12:23:32.473963 containerd[1548]: time="2025-01-17T12:23:32.473739390Z" level=info msg="StopPodSandbox for \"ab7b3801b62d55de0573ab8496d50c8ec4ea027fba01f5e43c10d8963a1a0944\"" Jan 17 12:23:32.553998 containerd[1548]: 2025-01-17 12:23:32.519 [INFO][4376] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="ab7b3801b62d55de0573ab8496d50c8ec4ea027fba01f5e43c10d8963a1a0944" Jan 17 12:23:32.553998 containerd[1548]: 2025-01-17 12:23:32.519 [INFO][4376] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="ab7b3801b62d55de0573ab8496d50c8ec4ea027fba01f5e43c10d8963a1a0944" iface="eth0" netns="/var/run/netns/cni-1bde92ff-edc8-f303-ffde-35b88b1e046c" Jan 17 12:23:32.553998 containerd[1548]: 2025-01-17 12:23:32.520 [INFO][4376] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="ab7b3801b62d55de0573ab8496d50c8ec4ea027fba01f5e43c10d8963a1a0944" iface="eth0" netns="/var/run/netns/cni-1bde92ff-edc8-f303-ffde-35b88b1e046c" Jan 17 12:23:32.553998 containerd[1548]: 2025-01-17 12:23:32.520 [INFO][4376] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="ab7b3801b62d55de0573ab8496d50c8ec4ea027fba01f5e43c10d8963a1a0944" iface="eth0" netns="/var/run/netns/cni-1bde92ff-edc8-f303-ffde-35b88b1e046c" Jan 17 12:23:32.553998 containerd[1548]: 2025-01-17 12:23:32.520 [INFO][4376] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="ab7b3801b62d55de0573ab8496d50c8ec4ea027fba01f5e43c10d8963a1a0944" Jan 17 12:23:32.553998 containerd[1548]: 2025-01-17 12:23:32.520 [INFO][4376] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ab7b3801b62d55de0573ab8496d50c8ec4ea027fba01f5e43c10d8963a1a0944" Jan 17 12:23:32.553998 containerd[1548]: 2025-01-17 12:23:32.541 [INFO][4384] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ab7b3801b62d55de0573ab8496d50c8ec4ea027fba01f5e43c10d8963a1a0944" HandleID="k8s-pod-network.ab7b3801b62d55de0573ab8496d50c8ec4ea027fba01f5e43c10d8963a1a0944" Workload="localhost-k8s-coredns--76f75df574--kf6mr-eth0" Jan 17 12:23:32.553998 containerd[1548]: 2025-01-17 12:23:32.541 [INFO][4384] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:23:32.553998 containerd[1548]: 2025-01-17 12:23:32.541 [INFO][4384] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:23:32.553998 containerd[1548]: 2025-01-17 12:23:32.549 [WARNING][4384] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ab7b3801b62d55de0573ab8496d50c8ec4ea027fba01f5e43c10d8963a1a0944" HandleID="k8s-pod-network.ab7b3801b62d55de0573ab8496d50c8ec4ea027fba01f5e43c10d8963a1a0944" Workload="localhost-k8s-coredns--76f75df574--kf6mr-eth0" Jan 17 12:23:32.553998 containerd[1548]: 2025-01-17 12:23:32.549 [INFO][4384] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ab7b3801b62d55de0573ab8496d50c8ec4ea027fba01f5e43c10d8963a1a0944" HandleID="k8s-pod-network.ab7b3801b62d55de0573ab8496d50c8ec4ea027fba01f5e43c10d8963a1a0944" Workload="localhost-k8s-coredns--76f75df574--kf6mr-eth0" Jan 17 12:23:32.553998 containerd[1548]: 2025-01-17 12:23:32.550 [INFO][4384] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:23:32.553998 containerd[1548]: 2025-01-17 12:23:32.552 [INFO][4376] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="ab7b3801b62d55de0573ab8496d50c8ec4ea027fba01f5e43c10d8963a1a0944" Jan 17 12:23:32.554489 containerd[1548]: time="2025-01-17T12:23:32.554133081Z" level=info msg="TearDown network for sandbox \"ab7b3801b62d55de0573ab8496d50c8ec4ea027fba01f5e43c10d8963a1a0944\" successfully" Jan 17 12:23:32.554489 containerd[1548]: time="2025-01-17T12:23:32.554171721Z" level=info msg="StopPodSandbox for \"ab7b3801b62d55de0573ab8496d50c8ec4ea027fba01f5e43c10d8963a1a0944\" returns successfully" Jan 17 12:23:32.554541 kubelet[2717]: E0117 12:23:32.554433 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:23:32.555056 containerd[1548]: time="2025-01-17T12:23:32.555029039Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-kf6mr,Uid:cf7e29e6-980f-42a1-85a4-fdb746002f5f,Namespace:kube-system,Attempt:1,}" Jan 17 12:23:32.686189 systemd[1]: run-netns-cni\x2d1bde92ff\x2dedc8\x2df303\x2dffde\x2d35b88b1e046c.mount: Deactivated successfully. 
Jan 17 12:23:32.707115 systemd-networkd[1230]: cali9527ee1c5bc: Link UP Jan 17 12:23:32.708094 systemd-networkd[1230]: cali9527ee1c5bc: Gained carrier Jan 17 12:23:32.727636 containerd[1548]: 2025-01-17 12:23:32.596 [INFO][4391] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--76f75df574--kf6mr-eth0 coredns-76f75df574- kube-system cf7e29e6-980f-42a1-85a4-fdb746002f5f 841 0 2025-01-17 12:23:02 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-76f75df574-kf6mr eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali9527ee1c5bc [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="1aa69fb518a931a2e0327b09e30b3bd0d6ab57b4abf918aebdc3f02bf5ee7ed4" Namespace="kube-system" Pod="coredns-76f75df574-kf6mr" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--kf6mr-" Jan 17 12:23:32.727636 containerd[1548]: 2025-01-17 12:23:32.596 [INFO][4391] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="1aa69fb518a931a2e0327b09e30b3bd0d6ab57b4abf918aebdc3f02bf5ee7ed4" Namespace="kube-system" Pod="coredns-76f75df574-kf6mr" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--kf6mr-eth0" Jan 17 12:23:32.727636 containerd[1548]: 2025-01-17 12:23:32.626 [INFO][4405] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1aa69fb518a931a2e0327b09e30b3bd0d6ab57b4abf918aebdc3f02bf5ee7ed4" HandleID="k8s-pod-network.1aa69fb518a931a2e0327b09e30b3bd0d6ab57b4abf918aebdc3f02bf5ee7ed4" Workload="localhost-k8s-coredns--76f75df574--kf6mr-eth0" Jan 17 12:23:32.727636 containerd[1548]: 2025-01-17 12:23:32.644 [INFO][4405] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1aa69fb518a931a2e0327b09e30b3bd0d6ab57b4abf918aebdc3f02bf5ee7ed4" HandleID="k8s-pod-network.1aa69fb518a931a2e0327b09e30b3bd0d6ab57b4abf918aebdc3f02bf5ee7ed4" Workload="localhost-k8s-coredns--76f75df574--kf6mr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d9dc0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-76f75df574-kf6mr", "timestamp":"2025-01-17 12:23:32.626661231 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 12:23:32.727636 containerd[1548]: 2025-01-17 12:23:32.644 [INFO][4405] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:23:32.727636 containerd[1548]: 2025-01-17 12:23:32.644 [INFO][4405] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 17 12:23:32.727636 containerd[1548]: 2025-01-17 12:23:32.644 [INFO][4405] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 17 12:23:32.727636 containerd[1548]: 2025-01-17 12:23:32.649 [INFO][4405] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.1aa69fb518a931a2e0327b09e30b3bd0d6ab57b4abf918aebdc3f02bf5ee7ed4" host="localhost" Jan 17 12:23:32.727636 containerd[1548]: 2025-01-17 12:23:32.662 [INFO][4405] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 17 12:23:32.727636 containerd[1548]: 2025-01-17 12:23:32.668 [INFO][4405] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 17 12:23:32.727636 containerd[1548]: 2025-01-17 12:23:32.674 [INFO][4405] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 17 12:23:32.727636 containerd[1548]: 2025-01-17 12:23:32.679 [INFO][4405] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 17 12:23:32.727636 containerd[1548]: 2025-01-17 12:23:32.679 [INFO][4405] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.1aa69fb518a931a2e0327b09e30b3bd0d6ab57b4abf918aebdc3f02bf5ee7ed4" host="localhost" Jan 17 12:23:32.727636 containerd[1548]: 2025-01-17 12:23:32.680 [INFO][4405] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.1aa69fb518a931a2e0327b09e30b3bd0d6ab57b4abf918aebdc3f02bf5ee7ed4 Jan 17 12:23:32.727636 containerd[1548]: 2025-01-17 12:23:32.685 [INFO][4405] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.1aa69fb518a931a2e0327b09e30b3bd0d6ab57b4abf918aebdc3f02bf5ee7ed4" host="localhost" Jan 17 12:23:32.727636 containerd[1548]: 2025-01-17 12:23:32.696 [INFO][4405] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.1aa69fb518a931a2e0327b09e30b3bd0d6ab57b4abf918aebdc3f02bf5ee7ed4" host="localhost" Jan 17 12:23:32.727636 containerd[1548]: 2025-01-17 12:23:32.698 [INFO][4405] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.1aa69fb518a931a2e0327b09e30b3bd0d6ab57b4abf918aebdc3f02bf5ee7ed4" host="localhost" Jan 17 12:23:32.727636 containerd[1548]: 2025-01-17 12:23:32.698 [INFO][4405] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
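The run of ipam/ipam.go messages above traces Calico's auto-assignment for coredns-76f75df574-kf6mr: acquire the host-wide IPAM lock, look up the host's block affinities, try the affine block 192.168.88.128/26, load and confirm it, claim one free address (192.168.88.130), create a handle, write the block back, and release the lock. A simplified sketch of that sequence only, with hypothetical types and an in-memory block instead of the real datastore:

```go
package main

import (
	"fmt"
	"net/netip"
	"sync"
)

// block is a hypothetical, in-memory stand-in for a Calico IPAM block.
type block struct {
	cidr      netip.Prefix          // e.g. 192.168.88.128/26
	allocated map[netip.Addr]string // addr -> handle ID ("Writing block in order to claim IPs")
}

type ipam struct {
	mu         sync.Mutex          // "host-wide IPAM lock"
	affinities map[string][]*block // host -> affine blocks
}

func (i *ipam) autoAssign(host, handleID string) (netip.Addr, error) {
	i.mu.Lock()         // "Acquired host-wide IPAM lock."
	defer i.mu.Unlock() // "Released host-wide IPAM lock."

	// "Looking up existing affinities for host" / "Trying affinity for 192.168.88.128/26"
	for _, b := range i.affinities[host] {
		// "Affinity is confirmed and block has been loaded"
		for a := b.cidr.Addr(); b.cidr.Contains(a); a = a.Next() {
			if _, taken := b.allocated[a]; taken {
				continue
			}
			b.allocated[a] = handleID // "Creating new handle" + claim the IP
			return a, nil             // "Successfully claimed IPs: [.../26]"
		}
	}
	return netip.Addr{}, fmt.Errorf("no affine block with free addresses for host %s", host)
}

func main() {
	b := &block{
		cidr: netip.MustParsePrefix("192.168.88.128/26"),
		allocated: map[netip.Addr]string{
			// Assumed prior allocations: .128 for the node itself, .129 for the
			// apiserver sandbox created earlier in this log.
			netip.MustParseAddr("192.168.88.128"): "node",
			netip.MustParseAddr("192.168.88.129"): "apiserver-sandbox",
		},
	}
	i := &ipam{affinities: map[string][]*block{"localhost": {b}}}
	addr, err := i.autoAssign("localhost", "coredns-sandbox")
	fmt.Println(addr, err) // 192.168.88.130 <nil>, matching the address claimed in the log
}
```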
Jan 17 12:23:32.727636 containerd[1548]: 2025-01-17 12:23:32.698 [INFO][4405] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="1aa69fb518a931a2e0327b09e30b3bd0d6ab57b4abf918aebdc3f02bf5ee7ed4" HandleID="k8s-pod-network.1aa69fb518a931a2e0327b09e30b3bd0d6ab57b4abf918aebdc3f02bf5ee7ed4" Workload="localhost-k8s-coredns--76f75df574--kf6mr-eth0" Jan 17 12:23:32.728241 containerd[1548]: 2025-01-17 12:23:32.703 [INFO][4391] cni-plugin/k8s.go 386: Populated endpoint ContainerID="1aa69fb518a931a2e0327b09e30b3bd0d6ab57b4abf918aebdc3f02bf5ee7ed4" Namespace="kube-system" Pod="coredns-76f75df574-kf6mr" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--kf6mr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--kf6mr-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"cf7e29e6-980f-42a1-85a4-fdb746002f5f", ResourceVersion:"841", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 23, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-76f75df574-kf6mr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9527ee1c5bc", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:23:32.728241 containerd[1548]: 2025-01-17 12:23:32.703 [INFO][4391] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="1aa69fb518a931a2e0327b09e30b3bd0d6ab57b4abf918aebdc3f02bf5ee7ed4" Namespace="kube-system" Pod="coredns-76f75df574-kf6mr" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--kf6mr-eth0" Jan 17 12:23:32.728241 containerd[1548]: 2025-01-17 12:23:32.703 [INFO][4391] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9527ee1c5bc ContainerID="1aa69fb518a931a2e0327b09e30b3bd0d6ab57b4abf918aebdc3f02bf5ee7ed4" Namespace="kube-system" Pod="coredns-76f75df574-kf6mr" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--kf6mr-eth0" Jan 17 12:23:32.728241 containerd[1548]: 2025-01-17 12:23:32.707 [INFO][4391] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1aa69fb518a931a2e0327b09e30b3bd0d6ab57b4abf918aebdc3f02bf5ee7ed4" Namespace="kube-system" Pod="coredns-76f75df574-kf6mr" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--kf6mr-eth0" Jan 17 12:23:32.728241 containerd[1548]: 2025-01-17 12:23:32.708 
[INFO][4391] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="1aa69fb518a931a2e0327b09e30b3bd0d6ab57b4abf918aebdc3f02bf5ee7ed4" Namespace="kube-system" Pod="coredns-76f75df574-kf6mr" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--kf6mr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--kf6mr-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"cf7e29e6-980f-42a1-85a4-fdb746002f5f", ResourceVersion:"841", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 23, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1aa69fb518a931a2e0327b09e30b3bd0d6ab57b4abf918aebdc3f02bf5ee7ed4", Pod:"coredns-76f75df574-kf6mr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9527ee1c5bc", MAC:"ca:78:9a:55:0b:c6", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:23:32.728241 containerd[1548]: 2025-01-17 12:23:32.721 [INFO][4391] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="1aa69fb518a931a2e0327b09e30b3bd0d6ab57b4abf918aebdc3f02bf5ee7ed4" Namespace="kube-system" Pod="coredns-76f75df574-kf6mr" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--kf6mr-eth0" Jan 17 12:23:32.794283 containerd[1548]: time="2025-01-17T12:23:32.794204717Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:23:32.794448 containerd[1548]: time="2025-01-17T12:23:32.794278277Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:23:32.794448 containerd[1548]: time="2025-01-17T12:23:32.794297477Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:23:32.794522 containerd[1548]: time="2025-01-17T12:23:32.794447796Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:23:32.818430 systemd-resolved[1434]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 17 12:23:32.837929 containerd[1548]: time="2025-01-17T12:23:32.837892814Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-kf6mr,Uid:cf7e29e6-980f-42a1-85a4-fdb746002f5f,Namespace:kube-system,Attempt:1,} returns sandbox id \"1aa69fb518a931a2e0327b09e30b3bd0d6ab57b4abf918aebdc3f02bf5ee7ed4\"" Jan 17 12:23:32.839274 kubelet[2717]: E0117 12:23:32.838772 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:23:32.841133 containerd[1548]: time="2025-01-17T12:23:32.841010687Z" level=info msg="CreateContainer within sandbox \"1aa69fb518a931a2e0327b09e30b3bd0d6ab57b4abf918aebdc3f02bf5ee7ed4\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 17 12:23:32.930572 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1038908563.mount: Deactivated successfully. Jan 17 12:23:32.946621 containerd[1548]: time="2025-01-17T12:23:32.946404319Z" level=info msg="CreateContainer within sandbox \"1aa69fb518a931a2e0327b09e30b3bd0d6ab57b4abf918aebdc3f02bf5ee7ed4\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"68ec56f801c483e558b237e8c7b8216358eb5622b97fde453a8695fc9653fc64\"" Jan 17 12:23:32.947410 containerd[1548]: time="2025-01-17T12:23:32.947323157Z" level=info msg="StartContainer for \"68ec56f801c483e558b237e8c7b8216358eb5622b97fde453a8695fc9653fc64\"" Jan 17 12:23:33.014147 containerd[1548]: time="2025-01-17T12:23:33.013756001Z" level=info msg="StartContainer for \"68ec56f801c483e558b237e8c7b8216358eb5622b97fde453a8695fc9653fc64\" returns successfully" Jan 17 12:23:33.159862 systemd-networkd[1230]: cali332eb49c52e: Gained IPv6LL Jan 17 12:23:33.289499 containerd[1548]: time="2025-01-17T12:23:33.289226931Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:23:33.292921 containerd[1548]: time="2025-01-17T12:23:33.292863763Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=39298409" Jan 17 12:23:33.293822 containerd[1548]: time="2025-01-17T12:23:33.293772041Z" level=info msg="ImageCreate event name:\"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:23:33.298610 containerd[1548]: time="2025-01-17T12:23:33.298565550Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:23:33.299571 containerd[1548]: time="2025-01-17T12:23:33.299540787Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"40668079\" in 1.364740885s" Jan 17 12:23:33.299644 containerd[1548]: time="2025-01-17T12:23:33.299581267Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference 
\"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\"" Jan 17 12:23:33.301211 containerd[1548]: time="2025-01-17T12:23:33.301162624Z" level=info msg="CreateContainer within sandbox \"5df787a6871dc0ce11233dbb2630bdb25f743f4ec821f2e55029c878668837ac\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 17 12:23:33.309305 containerd[1548]: time="2025-01-17T12:23:33.309257165Z" level=info msg="CreateContainer within sandbox \"5df787a6871dc0ce11233dbb2630bdb25f743f4ec821f2e55029c878668837ac\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"61803d0f6791a1c39084584549ae01c7bdf5f30052d4db007bc5f958248740ab\"" Jan 17 12:23:33.310383 containerd[1548]: time="2025-01-17T12:23:33.309766164Z" level=info msg="StartContainer for \"61803d0f6791a1c39084584549ae01c7bdf5f30052d4db007bc5f958248740ab\"" Jan 17 12:23:33.390791 containerd[1548]: time="2025-01-17T12:23:33.390677219Z" level=info msg="StartContainer for \"61803d0f6791a1c39084584549ae01c7bdf5f30052d4db007bc5f958248740ab\" returns successfully" Jan 17 12:23:33.473631 containerd[1548]: time="2025-01-17T12:23:33.473581869Z" level=info msg="StopPodSandbox for \"44a987f7b6aab41fa1c4bfe7b24bc44295e6e35bb8ea5cc5d144441a34448bc1\"" Jan 17 12:23:33.552168 containerd[1548]: 2025-01-17 12:23:33.516 [INFO][4574] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="44a987f7b6aab41fa1c4bfe7b24bc44295e6e35bb8ea5cc5d144441a34448bc1" Jan 17 12:23:33.552168 containerd[1548]: 2025-01-17 12:23:33.517 [INFO][4574] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="44a987f7b6aab41fa1c4bfe7b24bc44295e6e35bb8ea5cc5d144441a34448bc1" iface="eth0" netns="/var/run/netns/cni-d4a77ad6-c032-d583-e338-542198c0b94d" Jan 17 12:23:33.552168 containerd[1548]: 2025-01-17 12:23:33.517 [INFO][4574] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="44a987f7b6aab41fa1c4bfe7b24bc44295e6e35bb8ea5cc5d144441a34448bc1" iface="eth0" netns="/var/run/netns/cni-d4a77ad6-c032-d583-e338-542198c0b94d" Jan 17 12:23:33.552168 containerd[1548]: 2025-01-17 12:23:33.518 [INFO][4574] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="44a987f7b6aab41fa1c4bfe7b24bc44295e6e35bb8ea5cc5d144441a34448bc1" iface="eth0" netns="/var/run/netns/cni-d4a77ad6-c032-d583-e338-542198c0b94d" Jan 17 12:23:33.552168 containerd[1548]: 2025-01-17 12:23:33.518 [INFO][4574] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="44a987f7b6aab41fa1c4bfe7b24bc44295e6e35bb8ea5cc5d144441a34448bc1" Jan 17 12:23:33.552168 containerd[1548]: 2025-01-17 12:23:33.518 [INFO][4574] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="44a987f7b6aab41fa1c4bfe7b24bc44295e6e35bb8ea5cc5d144441a34448bc1" Jan 17 12:23:33.552168 containerd[1548]: 2025-01-17 12:23:33.536 [INFO][4582] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="44a987f7b6aab41fa1c4bfe7b24bc44295e6e35bb8ea5cc5d144441a34448bc1" HandleID="k8s-pod-network.44a987f7b6aab41fa1c4bfe7b24bc44295e6e35bb8ea5cc5d144441a34448bc1" Workload="localhost-k8s-calico--kube--controllers--9bdf448d6--lzvz2-eth0" Jan 17 12:23:33.552168 containerd[1548]: 2025-01-17 12:23:33.538 [INFO][4582] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:23:33.552168 containerd[1548]: 2025-01-17 12:23:33.538 [INFO][4582] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 17 12:23:33.552168 containerd[1548]: 2025-01-17 12:23:33.547 [WARNING][4582] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="44a987f7b6aab41fa1c4bfe7b24bc44295e6e35bb8ea5cc5d144441a34448bc1" HandleID="k8s-pod-network.44a987f7b6aab41fa1c4bfe7b24bc44295e6e35bb8ea5cc5d144441a34448bc1" Workload="localhost-k8s-calico--kube--controllers--9bdf448d6--lzvz2-eth0" Jan 17 12:23:33.552168 containerd[1548]: 2025-01-17 12:23:33.547 [INFO][4582] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="44a987f7b6aab41fa1c4bfe7b24bc44295e6e35bb8ea5cc5d144441a34448bc1" HandleID="k8s-pod-network.44a987f7b6aab41fa1c4bfe7b24bc44295e6e35bb8ea5cc5d144441a34448bc1" Workload="localhost-k8s-calico--kube--controllers--9bdf448d6--lzvz2-eth0" Jan 17 12:23:33.552168 containerd[1548]: 2025-01-17 12:23:33.548 [INFO][4582] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:23:33.552168 containerd[1548]: 2025-01-17 12:23:33.550 [INFO][4574] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="44a987f7b6aab41fa1c4bfe7b24bc44295e6e35bb8ea5cc5d144441a34448bc1" Jan 17 12:23:33.553005 containerd[1548]: time="2025-01-17T12:23:33.552281729Z" level=info msg="TearDown network for sandbox \"44a987f7b6aab41fa1c4bfe7b24bc44295e6e35bb8ea5cc5d144441a34448bc1\" successfully" Jan 17 12:23:33.553005 containerd[1548]: time="2025-01-17T12:23:33.552305049Z" level=info msg="StopPodSandbox for \"44a987f7b6aab41fa1c4bfe7b24bc44295e6e35bb8ea5cc5d144441a34448bc1\" returns successfully" Jan 17 12:23:33.553005 containerd[1548]: time="2025-01-17T12:23:33.552819088Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-9bdf448d6-lzvz2,Uid:e256e234-13b1-4fd7-b50f-50db194b2888,Namespace:calico-system,Attempt:1,}" Jan 17 12:23:33.658771 systemd-networkd[1230]: calia0ebdaf1f4b: Link UP Jan 17 12:23:33.658911 systemd-networkd[1230]: calia0ebdaf1f4b: Gained carrier Jan 17 12:23:33.674899 containerd[1548]: 2025-01-17 12:23:33.593 [INFO][4591] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--9bdf448d6--lzvz2-eth0 calico-kube-controllers-9bdf448d6- calico-system e256e234-13b1-4fd7-b50f-50db194b2888 864 0 2025-01-17 12:23:13 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:9bdf448d6 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-9bdf448d6-lzvz2 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calia0ebdaf1f4b [] []}} ContainerID="6b53e41618101489948fcd4c924f83c241d0b7ddeb5bc62be2ef08ae4e9816de" Namespace="calico-system" Pod="calico-kube-controllers-9bdf448d6-lzvz2" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--9bdf448d6--lzvz2-" Jan 17 12:23:33.674899 containerd[1548]: 2025-01-17 12:23:33.593 [INFO][4591] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="6b53e41618101489948fcd4c924f83c241d0b7ddeb5bc62be2ef08ae4e9816de" Namespace="calico-system" Pod="calico-kube-controllers-9bdf448d6-lzvz2" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--9bdf448d6--lzvz2-eth0" Jan 17 12:23:33.674899 containerd[1548]: 2025-01-17 12:23:33.618 [INFO][4603] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="6b53e41618101489948fcd4c924f83c241d0b7ddeb5bc62be2ef08ae4e9816de" HandleID="k8s-pod-network.6b53e41618101489948fcd4c924f83c241d0b7ddeb5bc62be2ef08ae4e9816de" Workload="localhost-k8s-calico--kube--controllers--9bdf448d6--lzvz2-eth0" Jan 17 12:23:33.674899 containerd[1548]: 2025-01-17 12:23:33.629 [INFO][4603] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6b53e41618101489948fcd4c924f83c241d0b7ddeb5bc62be2ef08ae4e9816de" HandleID="k8s-pod-network.6b53e41618101489948fcd4c924f83c241d0b7ddeb5bc62be2ef08ae4e9816de" Workload="localhost-k8s-calico--kube--controllers--9bdf448d6--lzvz2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003aaba0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-9bdf448d6-lzvz2", "timestamp":"2025-01-17 12:23:33.618665417 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 12:23:33.674899 containerd[1548]: 2025-01-17 12:23:33.629 [INFO][4603] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:23:33.674899 containerd[1548]: 2025-01-17 12:23:33.629 [INFO][4603] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:23:33.674899 containerd[1548]: 2025-01-17 12:23:33.629 [INFO][4603] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 17 12:23:33.674899 containerd[1548]: 2025-01-17 12:23:33.630 [INFO][4603] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.6b53e41618101489948fcd4c924f83c241d0b7ddeb5bc62be2ef08ae4e9816de" host="localhost" Jan 17 12:23:33.674899 containerd[1548]: 2025-01-17 12:23:33.634 [INFO][4603] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 17 12:23:33.674899 containerd[1548]: 2025-01-17 12:23:33.637 [INFO][4603] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 17 12:23:33.674899 containerd[1548]: 2025-01-17 12:23:33.640 [INFO][4603] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 17 12:23:33.674899 containerd[1548]: 2025-01-17 12:23:33.642 [INFO][4603] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 17 12:23:33.674899 containerd[1548]: 2025-01-17 12:23:33.643 [INFO][4603] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.6b53e41618101489948fcd4c924f83c241d0b7ddeb5bc62be2ef08ae4e9816de" host="localhost" Jan 17 12:23:33.674899 containerd[1548]: 2025-01-17 12:23:33.644 [INFO][4603] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.6b53e41618101489948fcd4c924f83c241d0b7ddeb5bc62be2ef08ae4e9816de Jan 17 12:23:33.674899 containerd[1548]: 2025-01-17 12:23:33.648 [INFO][4603] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.6b53e41618101489948fcd4c924f83c241d0b7ddeb5bc62be2ef08ae4e9816de" host="localhost" Jan 17 12:23:33.674899 containerd[1548]: 2025-01-17 12:23:33.654 [INFO][4603] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.6b53e41618101489948fcd4c924f83c241d0b7ddeb5bc62be2ef08ae4e9816de" host="localhost" Jan 17 12:23:33.674899 containerd[1548]: 2025-01-17 12:23:33.654 [INFO][4603] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: 
[192.168.88.131/26] handle="k8s-pod-network.6b53e41618101489948fcd4c924f83c241d0b7ddeb5bc62be2ef08ae4e9816de" host="localhost" Jan 17 12:23:33.674899 containerd[1548]: 2025-01-17 12:23:33.654 [INFO][4603] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:23:33.674899 containerd[1548]: 2025-01-17 12:23:33.654 [INFO][4603] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="6b53e41618101489948fcd4c924f83c241d0b7ddeb5bc62be2ef08ae4e9816de" HandleID="k8s-pod-network.6b53e41618101489948fcd4c924f83c241d0b7ddeb5bc62be2ef08ae4e9816de" Workload="localhost-k8s-calico--kube--controllers--9bdf448d6--lzvz2-eth0" Jan 17 12:23:33.675856 containerd[1548]: 2025-01-17 12:23:33.656 [INFO][4591] cni-plugin/k8s.go 386: Populated endpoint ContainerID="6b53e41618101489948fcd4c924f83c241d0b7ddeb5bc62be2ef08ae4e9816de" Namespace="calico-system" Pod="calico-kube-controllers-9bdf448d6-lzvz2" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--9bdf448d6--lzvz2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--9bdf448d6--lzvz2-eth0", GenerateName:"calico-kube-controllers-9bdf448d6-", Namespace:"calico-system", SelfLink:"", UID:"e256e234-13b1-4fd7-b50f-50db194b2888", ResourceVersion:"864", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 23, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"9bdf448d6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-9bdf448d6-lzvz2", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia0ebdaf1f4b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:23:33.675856 containerd[1548]: 2025-01-17 12:23:33.656 [INFO][4591] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="6b53e41618101489948fcd4c924f83c241d0b7ddeb5bc62be2ef08ae4e9816de" Namespace="calico-system" Pod="calico-kube-controllers-9bdf448d6-lzvz2" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--9bdf448d6--lzvz2-eth0" Jan 17 12:23:33.675856 containerd[1548]: 2025-01-17 12:23:33.656 [INFO][4591] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia0ebdaf1f4b ContainerID="6b53e41618101489948fcd4c924f83c241d0b7ddeb5bc62be2ef08ae4e9816de" Namespace="calico-system" Pod="calico-kube-controllers-9bdf448d6-lzvz2" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--9bdf448d6--lzvz2-eth0" Jan 17 12:23:33.675856 containerd[1548]: 2025-01-17 12:23:33.658 [INFO][4591] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6b53e41618101489948fcd4c924f83c241d0b7ddeb5bc62be2ef08ae4e9816de" Namespace="calico-system" 
Pod="calico-kube-controllers-9bdf448d6-lzvz2" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--9bdf448d6--lzvz2-eth0" Jan 17 12:23:33.675856 containerd[1548]: 2025-01-17 12:23:33.661 [INFO][4591] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="6b53e41618101489948fcd4c924f83c241d0b7ddeb5bc62be2ef08ae4e9816de" Namespace="calico-system" Pod="calico-kube-controllers-9bdf448d6-lzvz2" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--9bdf448d6--lzvz2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--9bdf448d6--lzvz2-eth0", GenerateName:"calico-kube-controllers-9bdf448d6-", Namespace:"calico-system", SelfLink:"", UID:"e256e234-13b1-4fd7-b50f-50db194b2888", ResourceVersion:"864", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 23, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"9bdf448d6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6b53e41618101489948fcd4c924f83c241d0b7ddeb5bc62be2ef08ae4e9816de", Pod:"calico-kube-controllers-9bdf448d6-lzvz2", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia0ebdaf1f4b", MAC:"3a:6c:fe:55:72:07", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:23:33.675856 containerd[1548]: 2025-01-17 12:23:33.672 [INFO][4591] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="6b53e41618101489948fcd4c924f83c241d0b7ddeb5bc62be2ef08ae4e9816de" Namespace="calico-system" Pod="calico-kube-controllers-9bdf448d6-lzvz2" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--9bdf448d6--lzvz2-eth0" Jan 17 12:23:33.681953 kubelet[2717]: E0117 12:23:33.681440 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:23:33.695118 systemd[1]: run-netns-cni\x2dd4a77ad6\x2dc032\x2dd583\x2de338\x2d542198c0b94d.mount: Deactivated successfully. Jan 17 12:23:33.704985 containerd[1548]: time="2025-01-17T12:23:33.704888820Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:23:33.704985 containerd[1548]: time="2025-01-17T12:23:33.704946259Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:23:33.705171 containerd[1548]: time="2025-01-17T12:23:33.705133739Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:23:33.705320 containerd[1548]: time="2025-01-17T12:23:33.705287139Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:23:33.712745 kubelet[2717]: I0117 12:23:33.709948 2717 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5d99b78498-h976w" podStartSLOduration=20.344385764 podStartE2EDuration="21.709910688s" podCreationTimestamp="2025-01-17 12:23:12 +0000 UTC" firstStartedPulling="2025-01-17 12:23:31.934294823 +0000 UTC m=+43.547589662" lastFinishedPulling="2025-01-17 12:23:33.299819787 +0000 UTC m=+44.913114586" observedRunningTime="2025-01-17 12:23:33.697931556 +0000 UTC m=+45.311226395" watchObservedRunningTime="2025-01-17 12:23:33.709910688 +0000 UTC m=+45.323205527" Jan 17 12:23:33.739222 systemd-resolved[1434]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 17 12:23:33.755218 containerd[1548]: time="2025-01-17T12:23:33.755185464Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-9bdf448d6-lzvz2,Uid:e256e234-13b1-4fd7-b50f-50db194b2888,Namespace:calico-system,Attempt:1,} returns sandbox id \"6b53e41618101489948fcd4c924f83c241d0b7ddeb5bc62be2ef08ae4e9816de\"" Jan 17 12:23:33.757772 containerd[1548]: time="2025-01-17T12:23:33.757592459Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Jan 17 12:23:34.473611 containerd[1548]: time="2025-01-17T12:23:34.473316050Z" level=info msg="StopPodSandbox for \"85475dae281093367a54e3d7e5299dd1c0c2c0881e9181a9208c08a128867632\"" Jan 17 12:23:34.473804 containerd[1548]: time="2025-01-17T12:23:34.473726409Z" level=info msg="StopPodSandbox for \"7c6f7aebe8e47fdebf5e4226c93fd1dba6a8861d79bda43cde364e68ce4b5570\"" Jan 17 12:23:34.479647 containerd[1548]: time="2025-01-17T12:23:34.479616436Z" level=info msg="StopPodSandbox for \"0c49679fb98a69a6cbebd3c05241103a03b39e1bca53c0a43dd8524eb750c10e\"" Jan 17 12:23:34.529091 kubelet[2717]: I0117 12:23:34.529019 2717 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-kf6mr" podStartSLOduration=32.528938246 podStartE2EDuration="32.528938246s" podCreationTimestamp="2025-01-17 12:23:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:23:33.712578322 +0000 UTC m=+45.325873201" watchObservedRunningTime="2025-01-17 12:23:34.528938246 +0000 UTC m=+46.142233085" Jan 17 12:23:34.583562 containerd[1548]: 2025-01-17 12:23:34.530 [INFO][4725] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="0c49679fb98a69a6cbebd3c05241103a03b39e1bca53c0a43dd8524eb750c10e" Jan 17 12:23:34.583562 containerd[1548]: 2025-01-17 12:23:34.530 [INFO][4725] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="0c49679fb98a69a6cbebd3c05241103a03b39e1bca53c0a43dd8524eb750c10e" iface="eth0" netns="/var/run/netns/cni-740c9d5e-d63b-ad88-7075-d80c974991df" Jan 17 12:23:34.583562 containerd[1548]: 2025-01-17 12:23:34.530 [INFO][4725] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="0c49679fb98a69a6cbebd3c05241103a03b39e1bca53c0a43dd8524eb750c10e" iface="eth0" netns="/var/run/netns/cni-740c9d5e-d63b-ad88-7075-d80c974991df" Jan 17 12:23:34.583562 containerd[1548]: 2025-01-17 12:23:34.530 [INFO][4725] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="0c49679fb98a69a6cbebd3c05241103a03b39e1bca53c0a43dd8524eb750c10e" iface="eth0" netns="/var/run/netns/cni-740c9d5e-d63b-ad88-7075-d80c974991df" Jan 17 12:23:34.583562 containerd[1548]: 2025-01-17 12:23:34.531 [INFO][4725] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="0c49679fb98a69a6cbebd3c05241103a03b39e1bca53c0a43dd8524eb750c10e" Jan 17 12:23:34.583562 containerd[1548]: 2025-01-17 12:23:34.531 [INFO][4725] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0c49679fb98a69a6cbebd3c05241103a03b39e1bca53c0a43dd8524eb750c10e" Jan 17 12:23:34.583562 containerd[1548]: 2025-01-17 12:23:34.568 [INFO][4738] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0c49679fb98a69a6cbebd3c05241103a03b39e1bca53c0a43dd8524eb750c10e" HandleID="k8s-pod-network.0c49679fb98a69a6cbebd3c05241103a03b39e1bca53c0a43dd8524eb750c10e" Workload="localhost-k8s-csi--node--driver--d8gl2-eth0" Jan 17 12:23:34.583562 containerd[1548]: 2025-01-17 12:23:34.569 [INFO][4738] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:23:34.583562 containerd[1548]: 2025-01-17 12:23:34.569 [INFO][4738] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:23:34.583562 containerd[1548]: 2025-01-17 12:23:34.578 [WARNING][4738] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="0c49679fb98a69a6cbebd3c05241103a03b39e1bca53c0a43dd8524eb750c10e" HandleID="k8s-pod-network.0c49679fb98a69a6cbebd3c05241103a03b39e1bca53c0a43dd8524eb750c10e" Workload="localhost-k8s-csi--node--driver--d8gl2-eth0" Jan 17 12:23:34.583562 containerd[1548]: 2025-01-17 12:23:34.578 [INFO][4738] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0c49679fb98a69a6cbebd3c05241103a03b39e1bca53c0a43dd8524eb750c10e" HandleID="k8s-pod-network.0c49679fb98a69a6cbebd3c05241103a03b39e1bca53c0a43dd8524eb750c10e" Workload="localhost-k8s-csi--node--driver--d8gl2-eth0" Jan 17 12:23:34.583562 containerd[1548]: 2025-01-17 12:23:34.579 [INFO][4738] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:23:34.583562 containerd[1548]: 2025-01-17 12:23:34.581 [INFO][4725] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="0c49679fb98a69a6cbebd3c05241103a03b39e1bca53c0a43dd8524eb750c10e" Jan 17 12:23:34.587395 containerd[1548]: time="2025-01-17T12:23:34.585227080Z" level=info msg="TearDown network for sandbox \"0c49679fb98a69a6cbebd3c05241103a03b39e1bca53c0a43dd8524eb750c10e\" successfully" Jan 17 12:23:34.587395 containerd[1548]: time="2025-01-17T12:23:34.585274240Z" level=info msg="StopPodSandbox for \"0c49679fb98a69a6cbebd3c05241103a03b39e1bca53c0a43dd8524eb750c10e\" returns successfully" Jan 17 12:23:34.587395 containerd[1548]: time="2025-01-17T12:23:34.585958119Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-d8gl2,Uid:0f7f2ce6-11ce-4b25-b85f-2f1455b73126,Namespace:calico-system,Attempt:1,}" Jan 17 12:23:34.586660 systemd[1]: run-netns-cni\x2d740c9d5e\x2dd63b\x2dad88\x2d7075\x2dd80c974991df.mount: Deactivated successfully. 
Jan 17 12:23:34.597024 containerd[1548]: 2025-01-17 12:23:34.534 [INFO][4708] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="7c6f7aebe8e47fdebf5e4226c93fd1dba6a8861d79bda43cde364e68ce4b5570" Jan 17 12:23:34.597024 containerd[1548]: 2025-01-17 12:23:34.534 [INFO][4708] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="7c6f7aebe8e47fdebf5e4226c93fd1dba6a8861d79bda43cde364e68ce4b5570" iface="eth0" netns="/var/run/netns/cni-815e7676-13d0-92d8-44e6-f528b5ed5b87" Jan 17 12:23:34.597024 containerd[1548]: 2025-01-17 12:23:34.534 [INFO][4708] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="7c6f7aebe8e47fdebf5e4226c93fd1dba6a8861d79bda43cde364e68ce4b5570" iface="eth0" netns="/var/run/netns/cni-815e7676-13d0-92d8-44e6-f528b5ed5b87" Jan 17 12:23:34.597024 containerd[1548]: 2025-01-17 12:23:34.536 [INFO][4708] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="7c6f7aebe8e47fdebf5e4226c93fd1dba6a8861d79bda43cde364e68ce4b5570" iface="eth0" netns="/var/run/netns/cni-815e7676-13d0-92d8-44e6-f528b5ed5b87" Jan 17 12:23:34.597024 containerd[1548]: 2025-01-17 12:23:34.536 [INFO][4708] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="7c6f7aebe8e47fdebf5e4226c93fd1dba6a8861d79bda43cde364e68ce4b5570" Jan 17 12:23:34.597024 containerd[1548]: 2025-01-17 12:23:34.536 [INFO][4708] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7c6f7aebe8e47fdebf5e4226c93fd1dba6a8861d79bda43cde364e68ce4b5570" Jan 17 12:23:34.597024 containerd[1548]: 2025-01-17 12:23:34.572 [INFO][4739] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7c6f7aebe8e47fdebf5e4226c93fd1dba6a8861d79bda43cde364e68ce4b5570" HandleID="k8s-pod-network.7c6f7aebe8e47fdebf5e4226c93fd1dba6a8861d79bda43cde364e68ce4b5570" Workload="localhost-k8s-calico--apiserver--5d99b78498--wqsg9-eth0" Jan 17 12:23:34.597024 containerd[1548]: 2025-01-17 12:23:34.572 [INFO][4739] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:23:34.597024 containerd[1548]: 2025-01-17 12:23:34.579 [INFO][4739] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:23:34.597024 containerd[1548]: 2025-01-17 12:23:34.591 [WARNING][4739] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="7c6f7aebe8e47fdebf5e4226c93fd1dba6a8861d79bda43cde364e68ce4b5570" HandleID="k8s-pod-network.7c6f7aebe8e47fdebf5e4226c93fd1dba6a8861d79bda43cde364e68ce4b5570" Workload="localhost-k8s-calico--apiserver--5d99b78498--wqsg9-eth0" Jan 17 12:23:34.597024 containerd[1548]: 2025-01-17 12:23:34.591 [INFO][4739] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7c6f7aebe8e47fdebf5e4226c93fd1dba6a8861d79bda43cde364e68ce4b5570" HandleID="k8s-pod-network.7c6f7aebe8e47fdebf5e4226c93fd1dba6a8861d79bda43cde364e68ce4b5570" Workload="localhost-k8s-calico--apiserver--5d99b78498--wqsg9-eth0" Jan 17 12:23:34.597024 containerd[1548]: 2025-01-17 12:23:34.592 [INFO][4739] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:23:34.597024 containerd[1548]: 2025-01-17 12:23:34.595 [INFO][4708] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="7c6f7aebe8e47fdebf5e4226c93fd1dba6a8861d79bda43cde364e68ce4b5570" Jan 17 12:23:34.597442 containerd[1548]: time="2025-01-17T12:23:34.597417293Z" level=info msg="TearDown network for sandbox \"7c6f7aebe8e47fdebf5e4226c93fd1dba6a8861d79bda43cde364e68ce4b5570\" successfully" Jan 17 12:23:34.597478 containerd[1548]: time="2025-01-17T12:23:34.597444173Z" level=info msg="StopPodSandbox for \"7c6f7aebe8e47fdebf5e4226c93fd1dba6a8861d79bda43cde364e68ce4b5570\" returns successfully" Jan 17 12:23:34.598071 containerd[1548]: time="2025-01-17T12:23:34.598042652Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d99b78498-wqsg9,Uid:85f28c23-7266-45b9-ab87-b4179415ea7d,Namespace:calico-apiserver,Attempt:1,}" Jan 17 12:23:34.599974 systemd[1]: run-netns-cni\x2d815e7676\x2d13d0\x2d92d8\x2d44e6\x2df528b5ed5b87.mount: Deactivated successfully. Jan 17 12:23:34.634791 containerd[1548]: 2025-01-17 12:23:34.545 [INFO][4706] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="85475dae281093367a54e3d7e5299dd1c0c2c0881e9181a9208c08a128867632" Jan 17 12:23:34.634791 containerd[1548]: 2025-01-17 12:23:34.545 [INFO][4706] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="85475dae281093367a54e3d7e5299dd1c0c2c0881e9181a9208c08a128867632" iface="eth0" netns="/var/run/netns/cni-0e979066-69f7-0049-29a1-022b54e89e1d" Jan 17 12:23:34.634791 containerd[1548]: 2025-01-17 12:23:34.546 [INFO][4706] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="85475dae281093367a54e3d7e5299dd1c0c2c0881e9181a9208c08a128867632" iface="eth0" netns="/var/run/netns/cni-0e979066-69f7-0049-29a1-022b54e89e1d" Jan 17 12:23:34.634791 containerd[1548]: 2025-01-17 12:23:34.546 [INFO][4706] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="85475dae281093367a54e3d7e5299dd1c0c2c0881e9181a9208c08a128867632" iface="eth0" netns="/var/run/netns/cni-0e979066-69f7-0049-29a1-022b54e89e1d" Jan 17 12:23:34.634791 containerd[1548]: 2025-01-17 12:23:34.546 [INFO][4706] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="85475dae281093367a54e3d7e5299dd1c0c2c0881e9181a9208c08a128867632" Jan 17 12:23:34.634791 containerd[1548]: 2025-01-17 12:23:34.546 [INFO][4706] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="85475dae281093367a54e3d7e5299dd1c0c2c0881e9181a9208c08a128867632" Jan 17 12:23:34.634791 containerd[1548]: 2025-01-17 12:23:34.577 [INFO][4751] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="85475dae281093367a54e3d7e5299dd1c0c2c0881e9181a9208c08a128867632" HandleID="k8s-pod-network.85475dae281093367a54e3d7e5299dd1c0c2c0881e9181a9208c08a128867632" Workload="localhost-k8s-coredns--76f75df574--rvkqr-eth0" Jan 17 12:23:34.634791 containerd[1548]: 2025-01-17 12:23:34.577 [INFO][4751] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:23:34.634791 containerd[1548]: 2025-01-17 12:23:34.592 [INFO][4751] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:23:34.634791 containerd[1548]: 2025-01-17 12:23:34.629 [WARNING][4751] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="85475dae281093367a54e3d7e5299dd1c0c2c0881e9181a9208c08a128867632" HandleID="k8s-pod-network.85475dae281093367a54e3d7e5299dd1c0c2c0881e9181a9208c08a128867632" Workload="localhost-k8s-coredns--76f75df574--rvkqr-eth0" Jan 17 12:23:34.634791 containerd[1548]: 2025-01-17 12:23:34.629 [INFO][4751] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="85475dae281093367a54e3d7e5299dd1c0c2c0881e9181a9208c08a128867632" HandleID="k8s-pod-network.85475dae281093367a54e3d7e5299dd1c0c2c0881e9181a9208c08a128867632" Workload="localhost-k8s-coredns--76f75df574--rvkqr-eth0" Jan 17 12:23:34.634791 containerd[1548]: 2025-01-17 12:23:34.631 [INFO][4751] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:23:34.634791 containerd[1548]: 2025-01-17 12:23:34.632 [INFO][4706] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="85475dae281093367a54e3d7e5299dd1c0c2c0881e9181a9208c08a128867632" Jan 17 12:23:34.635183 containerd[1548]: time="2025-01-17T12:23:34.635016649Z" level=info msg="TearDown network for sandbox \"85475dae281093367a54e3d7e5299dd1c0c2c0881e9181a9208c08a128867632\" successfully" Jan 17 12:23:34.635183 containerd[1548]: time="2025-01-17T12:23:34.635040489Z" level=info msg="StopPodSandbox for \"85475dae281093367a54e3d7e5299dd1c0c2c0881e9181a9208c08a128867632\" returns successfully" Jan 17 12:23:34.635405 kubelet[2717]: E0117 12:23:34.635364 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:23:34.635877 containerd[1548]: time="2025-01-17T12:23:34.635840728Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-rvkqr,Uid:c85f2c0e-97a2-4021-ae15-4f96370ba9bd,Namespace:kube-system,Attempt:1,}" Jan 17 12:23:34.691146 kubelet[2717]: I0117 12:23:34.690539 2717 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 17 12:23:34.691146 kubelet[2717]: E0117 12:23:34.691065 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:23:34.697898 systemd[1]: run-netns-cni\x2d0e979066\x2d69f7\x2d0049\x2d29a1\x2d022b54e89e1d.mount: Deactivated successfully. 
Jan 17 12:23:34.759528 systemd-networkd[1230]: cali9527ee1c5bc: Gained IPv6LL Jan 17 12:23:34.772988 systemd-networkd[1230]: cali1cbaecf3efa: Link UP Jan 17 12:23:34.773123 systemd-networkd[1230]: cali1cbaecf3efa: Gained carrier Jan 17 12:23:34.787209 containerd[1548]: 2025-01-17 12:23:34.685 [INFO][4770] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5d99b78498--wqsg9-eth0 calico-apiserver-5d99b78498- calico-apiserver 85f28c23-7266-45b9-ab87-b4179415ea7d 887 0 2025-01-17 12:23:12 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5d99b78498 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5d99b78498-wqsg9 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali1cbaecf3efa [] []}} ContainerID="d89ddd62de8a1ac7a0c6fdec99c7f5da262ce1eb2d7984b353171448d54e6e8d" Namespace="calico-apiserver" Pod="calico-apiserver-5d99b78498-wqsg9" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d99b78498--wqsg9-" Jan 17 12:23:34.787209 containerd[1548]: 2025-01-17 12:23:34.686 [INFO][4770] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="d89ddd62de8a1ac7a0c6fdec99c7f5da262ce1eb2d7984b353171448d54e6e8d" Namespace="calico-apiserver" Pod="calico-apiserver-5d99b78498-wqsg9" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d99b78498--wqsg9-eth0" Jan 17 12:23:34.787209 containerd[1548]: 2025-01-17 12:23:34.720 [INFO][4808] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d89ddd62de8a1ac7a0c6fdec99c7f5da262ce1eb2d7984b353171448d54e6e8d" HandleID="k8s-pod-network.d89ddd62de8a1ac7a0c6fdec99c7f5da262ce1eb2d7984b353171448d54e6e8d" Workload="localhost-k8s-calico--apiserver--5d99b78498--wqsg9-eth0" Jan 17 12:23:34.787209 containerd[1548]: 2025-01-17 12:23:34.735 [INFO][4808] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d89ddd62de8a1ac7a0c6fdec99c7f5da262ce1eb2d7984b353171448d54e6e8d" HandleID="k8s-pod-network.d89ddd62de8a1ac7a0c6fdec99c7f5da262ce1eb2d7984b353171448d54e6e8d" Workload="localhost-k8s-calico--apiserver--5d99b78498--wqsg9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003a91f0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-5d99b78498-wqsg9", "timestamp":"2025-01-17 12:23:34.72012498 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 12:23:34.787209 containerd[1548]: 2025-01-17 12:23:34.735 [INFO][4808] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:23:34.787209 containerd[1548]: 2025-01-17 12:23:34.735 [INFO][4808] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 17 12:23:34.787209 containerd[1548]: 2025-01-17 12:23:34.735 [INFO][4808] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 17 12:23:34.787209 containerd[1548]: 2025-01-17 12:23:34.738 [INFO][4808] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.d89ddd62de8a1ac7a0c6fdec99c7f5da262ce1eb2d7984b353171448d54e6e8d" host="localhost" Jan 17 12:23:34.787209 containerd[1548]: 2025-01-17 12:23:34.746 [INFO][4808] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 17 12:23:34.787209 containerd[1548]: 2025-01-17 12:23:34.750 [INFO][4808] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 17 12:23:34.787209 containerd[1548]: 2025-01-17 12:23:34.751 [INFO][4808] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 17 12:23:34.787209 containerd[1548]: 2025-01-17 12:23:34.753 [INFO][4808] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 17 12:23:34.787209 containerd[1548]: 2025-01-17 12:23:34.754 [INFO][4808] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d89ddd62de8a1ac7a0c6fdec99c7f5da262ce1eb2d7984b353171448d54e6e8d" host="localhost" Jan 17 12:23:34.787209 containerd[1548]: 2025-01-17 12:23:34.755 [INFO][4808] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.d89ddd62de8a1ac7a0c6fdec99c7f5da262ce1eb2d7984b353171448d54e6e8d Jan 17 12:23:34.787209 containerd[1548]: 2025-01-17 12:23:34.758 [INFO][4808] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d89ddd62de8a1ac7a0c6fdec99c7f5da262ce1eb2d7984b353171448d54e6e8d" host="localhost" Jan 17 12:23:34.787209 containerd[1548]: 2025-01-17 12:23:34.765 [INFO][4808] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.d89ddd62de8a1ac7a0c6fdec99c7f5da262ce1eb2d7984b353171448d54e6e8d" host="localhost" Jan 17 12:23:34.787209 containerd[1548]: 2025-01-17 12:23:34.765 [INFO][4808] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.d89ddd62de8a1ac7a0c6fdec99c7f5da262ce1eb2d7984b353171448d54e6e8d" host="localhost" Jan 17 12:23:34.787209 containerd[1548]: 2025-01-17 12:23:34.765 [INFO][4808] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 17 12:23:34.787209 containerd[1548]: 2025-01-17 12:23:34.765 [INFO][4808] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="d89ddd62de8a1ac7a0c6fdec99c7f5da262ce1eb2d7984b353171448d54e6e8d" HandleID="k8s-pod-network.d89ddd62de8a1ac7a0c6fdec99c7f5da262ce1eb2d7984b353171448d54e6e8d" Workload="localhost-k8s-calico--apiserver--5d99b78498--wqsg9-eth0" Jan 17 12:23:34.787792 containerd[1548]: 2025-01-17 12:23:34.767 [INFO][4770] cni-plugin/k8s.go 386: Populated endpoint ContainerID="d89ddd62de8a1ac7a0c6fdec99c7f5da262ce1eb2d7984b353171448d54e6e8d" Namespace="calico-apiserver" Pod="calico-apiserver-5d99b78498-wqsg9" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d99b78498--wqsg9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5d99b78498--wqsg9-eth0", GenerateName:"calico-apiserver-5d99b78498-", Namespace:"calico-apiserver", SelfLink:"", UID:"85f28c23-7266-45b9-ab87-b4179415ea7d", ResourceVersion:"887", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 23, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5d99b78498", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5d99b78498-wqsg9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1cbaecf3efa", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:23:34.787792 containerd[1548]: 2025-01-17 12:23:34.767 [INFO][4770] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="d89ddd62de8a1ac7a0c6fdec99c7f5da262ce1eb2d7984b353171448d54e6e8d" Namespace="calico-apiserver" Pod="calico-apiserver-5d99b78498-wqsg9" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d99b78498--wqsg9-eth0" Jan 17 12:23:34.787792 containerd[1548]: 2025-01-17 12:23:34.768 [INFO][4770] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1cbaecf3efa ContainerID="d89ddd62de8a1ac7a0c6fdec99c7f5da262ce1eb2d7984b353171448d54e6e8d" Namespace="calico-apiserver" Pod="calico-apiserver-5d99b78498-wqsg9" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d99b78498--wqsg9-eth0" Jan 17 12:23:34.787792 containerd[1548]: 2025-01-17 12:23:34.770 [INFO][4770] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d89ddd62de8a1ac7a0c6fdec99c7f5da262ce1eb2d7984b353171448d54e6e8d" Namespace="calico-apiserver" Pod="calico-apiserver-5d99b78498-wqsg9" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d99b78498--wqsg9-eth0" Jan 17 12:23:34.787792 containerd[1548]: 2025-01-17 12:23:34.774 [INFO][4770] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="d89ddd62de8a1ac7a0c6fdec99c7f5da262ce1eb2d7984b353171448d54e6e8d" Namespace="calico-apiserver" Pod="calico-apiserver-5d99b78498-wqsg9" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d99b78498--wqsg9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5d99b78498--wqsg9-eth0", GenerateName:"calico-apiserver-5d99b78498-", Namespace:"calico-apiserver", SelfLink:"", UID:"85f28c23-7266-45b9-ab87-b4179415ea7d", ResourceVersion:"887", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 23, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5d99b78498", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d89ddd62de8a1ac7a0c6fdec99c7f5da262ce1eb2d7984b353171448d54e6e8d", Pod:"calico-apiserver-5d99b78498-wqsg9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1cbaecf3efa", MAC:"46:b4:5d:ec:d7:bd", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:23:34.787792 containerd[1548]: 2025-01-17 12:23:34.784 [INFO][4770] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="d89ddd62de8a1ac7a0c6fdec99c7f5da262ce1eb2d7984b353171448d54e6e8d" Namespace="calico-apiserver" Pod="calico-apiserver-5d99b78498-wqsg9" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d99b78498--wqsg9-eth0" Jan 17 12:23:34.823807 systemd-networkd[1230]: cali49e0c930ce6: Link UP Jan 17 12:23:34.824629 systemd-networkd[1230]: cali49e0c930ce6: Gained carrier Jan 17 12:23:34.827857 containerd[1548]: time="2025-01-17T12:23:34.827205901Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:23:34.828643 containerd[1548]: time="2025-01-17T12:23:34.828600938Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:23:34.828911 containerd[1548]: time="2025-01-17T12:23:34.828628498Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:23:34.831078 containerd[1548]: time="2025-01-17T12:23:34.831029453Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:23:34.846850 containerd[1548]: 2025-01-17 12:23:34.691 [INFO][4785] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--76f75df574--rvkqr-eth0 coredns-76f75df574- kube-system c85f2c0e-97a2-4021-ae15-4f96370ba9bd 888 0 2025-01-17 12:23:02 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-76f75df574-rvkqr eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali49e0c930ce6 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="aafc8ef2a974fbc6312a18f30e759a636a35a326588cbf971a9c950304ec6103" Namespace="kube-system" Pod="coredns-76f75df574-rvkqr" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--rvkqr-" Jan 17 12:23:34.846850 containerd[1548]: 2025-01-17 12:23:34.691 [INFO][4785] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="aafc8ef2a974fbc6312a18f30e759a636a35a326588cbf971a9c950304ec6103" Namespace="kube-system" Pod="coredns-76f75df574-rvkqr" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--rvkqr-eth0" Jan 17 12:23:34.846850 containerd[1548]: 2025-01-17 12:23:34.719 [INFO][4807] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="aafc8ef2a974fbc6312a18f30e759a636a35a326588cbf971a9c950304ec6103" HandleID="k8s-pod-network.aafc8ef2a974fbc6312a18f30e759a636a35a326588cbf971a9c950304ec6103" Workload="localhost-k8s-coredns--76f75df574--rvkqr-eth0" Jan 17 12:23:34.846850 containerd[1548]: 2025-01-17 12:23:34.740 [INFO][4807] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="aafc8ef2a974fbc6312a18f30e759a636a35a326588cbf971a9c950304ec6103" HandleID="k8s-pod-network.aafc8ef2a974fbc6312a18f30e759a636a35a326588cbf971a9c950304ec6103" Workload="localhost-k8s-coredns--76f75df574--rvkqr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d8b30), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-76f75df574-rvkqr", "timestamp":"2025-01-17 12:23:34.719724741 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 12:23:34.846850 containerd[1548]: 2025-01-17 12:23:34.740 [INFO][4807] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:23:34.846850 containerd[1548]: 2025-01-17 12:23:34.765 [INFO][4807] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 17 12:23:34.846850 containerd[1548]: 2025-01-17 12:23:34.765 [INFO][4807] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 17 12:23:34.846850 containerd[1548]: 2025-01-17 12:23:34.767 [INFO][4807] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.aafc8ef2a974fbc6312a18f30e759a636a35a326588cbf971a9c950304ec6103" host="localhost" Jan 17 12:23:34.846850 containerd[1548]: 2025-01-17 12:23:34.779 [INFO][4807] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 17 12:23:34.846850 containerd[1548]: 2025-01-17 12:23:34.785 [INFO][4807] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 17 12:23:34.846850 containerd[1548]: 2025-01-17 12:23:34.787 [INFO][4807] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 17 12:23:34.846850 containerd[1548]: 2025-01-17 12:23:34.789 [INFO][4807] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 17 12:23:34.846850 containerd[1548]: 2025-01-17 12:23:34.789 [INFO][4807] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.aafc8ef2a974fbc6312a18f30e759a636a35a326588cbf971a9c950304ec6103" host="localhost" Jan 17 12:23:34.846850 containerd[1548]: 2025-01-17 12:23:34.792 [INFO][4807] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.aafc8ef2a974fbc6312a18f30e759a636a35a326588cbf971a9c950304ec6103 Jan 17 12:23:34.846850 containerd[1548]: 2025-01-17 12:23:34.807 [INFO][4807] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.aafc8ef2a974fbc6312a18f30e759a636a35a326588cbf971a9c950304ec6103" host="localhost" Jan 17 12:23:34.846850 containerd[1548]: 2025-01-17 12:23:34.815 [INFO][4807] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.aafc8ef2a974fbc6312a18f30e759a636a35a326588cbf971a9c950304ec6103" host="localhost" Jan 17 12:23:34.846850 containerd[1548]: 2025-01-17 12:23:34.815 [INFO][4807] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.aafc8ef2a974fbc6312a18f30e759a636a35a326588cbf971a9c950304ec6103" host="localhost" Jan 17 12:23:34.846850 containerd[1548]: 2025-01-17 12:23:34.815 [INFO][4807] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
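Editor's note: the same IPAM walk repeats here for coredns-76f75df574-rvkqr and yields the next address in the block, 192.168.88.133. For reference, the 192.168.88.128/26 block these lines keep loading spans 64 addresses (.128 through .191), which is why consecutive pods on this host get sequential addresses without a new block affinity. A small arithmetic check (illustrative only, not Calico code):

```go
// Size and range of the /26 block seen in the IPAM log lines above.
package main

import (
	"fmt"
	"net/netip"
)

func main() {
	p := netip.MustParsePrefix("192.168.88.128/26")

	size := 1 << (32 - p.Bits()) // 2^(32-26) = 64 addresses

	last := p.Addr()
	for i := 0; i < size-1; i++ {
		last = last.Next()
	}

	fmt.Printf("block %s: %d addresses, %s - %s\n", p, size, p.Addr(), last)
	// block 192.168.88.128/26: 64 addresses, 192.168.88.128 - 192.168.88.191
}
```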
Jan 17 12:23:34.846850 containerd[1548]: 2025-01-17 12:23:34.815 [INFO][4807] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="aafc8ef2a974fbc6312a18f30e759a636a35a326588cbf971a9c950304ec6103" HandleID="k8s-pod-network.aafc8ef2a974fbc6312a18f30e759a636a35a326588cbf971a9c950304ec6103" Workload="localhost-k8s-coredns--76f75df574--rvkqr-eth0" Jan 17 12:23:34.847492 containerd[1548]: 2025-01-17 12:23:34.818 [INFO][4785] cni-plugin/k8s.go 386: Populated endpoint ContainerID="aafc8ef2a974fbc6312a18f30e759a636a35a326588cbf971a9c950304ec6103" Namespace="kube-system" Pod="coredns-76f75df574-rvkqr" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--rvkqr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--rvkqr-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"c85f2c0e-97a2-4021-ae15-4f96370ba9bd", ResourceVersion:"888", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 23, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-76f75df574-rvkqr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali49e0c930ce6", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:23:34.847492 containerd[1548]: 2025-01-17 12:23:34.818 [INFO][4785] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="aafc8ef2a974fbc6312a18f30e759a636a35a326588cbf971a9c950304ec6103" Namespace="kube-system" Pod="coredns-76f75df574-rvkqr" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--rvkqr-eth0" Jan 17 12:23:34.847492 containerd[1548]: 2025-01-17 12:23:34.818 [INFO][4785] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali49e0c930ce6 ContainerID="aafc8ef2a974fbc6312a18f30e759a636a35a326588cbf971a9c950304ec6103" Namespace="kube-system" Pod="coredns-76f75df574-rvkqr" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--rvkqr-eth0" Jan 17 12:23:34.847492 containerd[1548]: 2025-01-17 12:23:34.825 [INFO][4785] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="aafc8ef2a974fbc6312a18f30e759a636a35a326588cbf971a9c950304ec6103" Namespace="kube-system" Pod="coredns-76f75df574-rvkqr" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--rvkqr-eth0" Jan 17 12:23:34.847492 containerd[1548]: 2025-01-17 12:23:34.825 
[INFO][4785] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="aafc8ef2a974fbc6312a18f30e759a636a35a326588cbf971a9c950304ec6103" Namespace="kube-system" Pod="coredns-76f75df574-rvkqr" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--rvkqr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--rvkqr-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"c85f2c0e-97a2-4021-ae15-4f96370ba9bd", ResourceVersion:"888", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 23, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"aafc8ef2a974fbc6312a18f30e759a636a35a326588cbf971a9c950304ec6103", Pod:"coredns-76f75df574-rvkqr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali49e0c930ce6", MAC:"1e:d8:06:6d:83:09", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:23:34.847492 containerd[1548]: 2025-01-17 12:23:34.841 [INFO][4785] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="aafc8ef2a974fbc6312a18f30e759a636a35a326588cbf971a9c950304ec6103" Namespace="kube-system" Pod="coredns-76f75df574-rvkqr" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--rvkqr-eth0" Jan 17 12:23:34.887509 systemd-networkd[1230]: cali0d478f83769: Link UP Jan 17 12:23:34.888435 systemd-networkd[1230]: cali0d478f83769: Gained carrier Jan 17 12:23:34.898519 containerd[1548]: time="2025-01-17T12:23:34.897734264Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:23:34.898519 containerd[1548]: time="2025-01-17T12:23:34.897865024Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:23:34.898519 containerd[1548]: time="2025-01-17T12:23:34.897928824Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:23:34.898941 containerd[1548]: time="2025-01-17T12:23:34.898708542Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:23:34.911701 containerd[1548]: 2025-01-17 12:23:34.694 [INFO][4764] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--d8gl2-eth0 csi-node-driver- calico-system 0f7f2ce6-11ce-4b25-b85f-2f1455b73126 886 0 2025-01-17 12:23:12 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:55b695c467 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-d8gl2 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali0d478f83769 [] []}} ContainerID="50303a39c30d5d187d67277aac95c232ccb3a932295916b82f4e5fa6834f4727" Namespace="calico-system" Pod="csi-node-driver-d8gl2" WorkloadEndpoint="localhost-k8s-csi--node--driver--d8gl2-" Jan 17 12:23:34.911701 containerd[1548]: 2025-01-17 12:23:34.694 [INFO][4764] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="50303a39c30d5d187d67277aac95c232ccb3a932295916b82f4e5fa6834f4727" Namespace="calico-system" Pod="csi-node-driver-d8gl2" WorkloadEndpoint="localhost-k8s-csi--node--driver--d8gl2-eth0" Jan 17 12:23:34.911701 containerd[1548]: 2025-01-17 12:23:34.733 [INFO][4817] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="50303a39c30d5d187d67277aac95c232ccb3a932295916b82f4e5fa6834f4727" HandleID="k8s-pod-network.50303a39c30d5d187d67277aac95c232ccb3a932295916b82f4e5fa6834f4727" Workload="localhost-k8s-csi--node--driver--d8gl2-eth0" Jan 17 12:23:34.911701 containerd[1548]: 2025-01-17 12:23:34.748 [INFO][4817] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="50303a39c30d5d187d67277aac95c232ccb3a932295916b82f4e5fa6834f4727" HandleID="k8s-pod-network.50303a39c30d5d187d67277aac95c232ccb3a932295916b82f4e5fa6834f4727" Workload="localhost-k8s-csi--node--driver--d8gl2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d9700), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-d8gl2", "timestamp":"2025-01-17 12:23:34.733135271 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 12:23:34.911701 containerd[1548]: 2025-01-17 12:23:34.748 [INFO][4817] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:23:34.911701 containerd[1548]: 2025-01-17 12:23:34.815 [INFO][4817] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 17 12:23:34.911701 containerd[1548]: 2025-01-17 12:23:34.815 [INFO][4817] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 17 12:23:34.911701 containerd[1548]: 2025-01-17 12:23:34.820 [INFO][4817] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.50303a39c30d5d187d67277aac95c232ccb3a932295916b82f4e5fa6834f4727" host="localhost" Jan 17 12:23:34.911701 containerd[1548]: 2025-01-17 12:23:34.828 [INFO][4817] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 17 12:23:34.911701 containerd[1548]: 2025-01-17 12:23:34.843 [INFO][4817] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 17 12:23:34.911701 containerd[1548]: 2025-01-17 12:23:34.846 [INFO][4817] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 17 12:23:34.911701 containerd[1548]: 2025-01-17 12:23:34.850 [INFO][4817] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 17 12:23:34.911701 containerd[1548]: 2025-01-17 12:23:34.850 [INFO][4817] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.50303a39c30d5d187d67277aac95c232ccb3a932295916b82f4e5fa6834f4727" host="localhost" Jan 17 12:23:34.911701 containerd[1548]: 2025-01-17 12:23:34.851 [INFO][4817] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.50303a39c30d5d187d67277aac95c232ccb3a932295916b82f4e5fa6834f4727 Jan 17 12:23:34.911701 containerd[1548]: 2025-01-17 12:23:34.858 [INFO][4817] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.50303a39c30d5d187d67277aac95c232ccb3a932295916b82f4e5fa6834f4727" host="localhost" Jan 17 12:23:34.911701 containerd[1548]: 2025-01-17 12:23:34.871 [INFO][4817] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.50303a39c30d5d187d67277aac95c232ccb3a932295916b82f4e5fa6834f4727" host="localhost" Jan 17 12:23:34.911701 containerd[1548]: 2025-01-17 12:23:34.871 [INFO][4817] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.50303a39c30d5d187d67277aac95c232ccb3a932295916b82f4e5fa6834f4727" host="localhost" Jan 17 12:23:34.911701 containerd[1548]: 2025-01-17 12:23:34.871 [INFO][4817] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
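Editor's note: the WorkloadEndpoint dumps above print the coredns ports in Go's hex notation (Port:0x35, Port:0x23c1). A quick check, purely illustrative and not taken from the log, confirms these are the familiar CoreDNS ports 53 (dns, dns-tcp) and 9153 (metrics):

```go
// Decode the hex port values printed in the WorkloadEndpoint dumps above.
package main

import "fmt"

func main() {
	ports := map[string]uint16{
		"dns":     0x35,   // 53/UDP
		"dns-tcp": 0x35,   // 53/TCP
		"metrics": 0x23c1, // 9153/TCP
	}
	for name, p := range ports {
		fmt.Printf("%-8s -> %d\n", name, p)
	}
}
```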
Jan 17 12:23:34.911701 containerd[1548]: 2025-01-17 12:23:34.871 [INFO][4817] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="50303a39c30d5d187d67277aac95c232ccb3a932295916b82f4e5fa6834f4727" HandleID="k8s-pod-network.50303a39c30d5d187d67277aac95c232ccb3a932295916b82f4e5fa6834f4727" Workload="localhost-k8s-csi--node--driver--d8gl2-eth0" Jan 17 12:23:34.912224 containerd[1548]: 2025-01-17 12:23:34.883 [INFO][4764] cni-plugin/k8s.go 386: Populated endpoint ContainerID="50303a39c30d5d187d67277aac95c232ccb3a932295916b82f4e5fa6834f4727" Namespace="calico-system" Pod="csi-node-driver-d8gl2" WorkloadEndpoint="localhost-k8s-csi--node--driver--d8gl2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--d8gl2-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"0f7f2ce6-11ce-4b25-b85f-2f1455b73126", ResourceVersion:"886", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 23, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-d8gl2", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali0d478f83769", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:23:34.912224 containerd[1548]: 2025-01-17 12:23:34.883 [INFO][4764] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.134/32] ContainerID="50303a39c30d5d187d67277aac95c232ccb3a932295916b82f4e5fa6834f4727" Namespace="calico-system" Pod="csi-node-driver-d8gl2" WorkloadEndpoint="localhost-k8s-csi--node--driver--d8gl2-eth0" Jan 17 12:23:34.912224 containerd[1548]: 2025-01-17 12:23:34.883 [INFO][4764] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0d478f83769 ContainerID="50303a39c30d5d187d67277aac95c232ccb3a932295916b82f4e5fa6834f4727" Namespace="calico-system" Pod="csi-node-driver-d8gl2" WorkloadEndpoint="localhost-k8s-csi--node--driver--d8gl2-eth0" Jan 17 12:23:34.912224 containerd[1548]: 2025-01-17 12:23:34.889 [INFO][4764] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="50303a39c30d5d187d67277aac95c232ccb3a932295916b82f4e5fa6834f4727" Namespace="calico-system" Pod="csi-node-driver-d8gl2" WorkloadEndpoint="localhost-k8s-csi--node--driver--d8gl2-eth0" Jan 17 12:23:34.912224 containerd[1548]: 2025-01-17 12:23:34.891 [INFO][4764] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="50303a39c30d5d187d67277aac95c232ccb3a932295916b82f4e5fa6834f4727" Namespace="calico-system" Pod="csi-node-driver-d8gl2" WorkloadEndpoint="localhost-k8s-csi--node--driver--d8gl2-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--d8gl2-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"0f7f2ce6-11ce-4b25-b85f-2f1455b73126", ResourceVersion:"886", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 23, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"50303a39c30d5d187d67277aac95c232ccb3a932295916b82f4e5fa6834f4727", Pod:"csi-node-driver-d8gl2", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali0d478f83769", MAC:"ca:14:0d:fb:11:ed", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:23:34.912224 containerd[1548]: 2025-01-17 12:23:34.907 [INFO][4764] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="50303a39c30d5d187d67277aac95c232ccb3a932295916b82f4e5fa6834f4727" Namespace="calico-system" Pod="csi-node-driver-d8gl2" WorkloadEndpoint="localhost-k8s-csi--node--driver--d8gl2-eth0" Jan 17 12:23:34.915303 systemd-resolved[1434]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 17 12:23:34.939977 systemd-resolved[1434]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 17 12:23:34.944777 containerd[1548]: time="2025-01-17T12:23:34.944741039Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d99b78498-wqsg9,Uid:85f28c23-7266-45b9-ab87-b4179415ea7d,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"d89ddd62de8a1ac7a0c6fdec99c7f5da262ce1eb2d7984b353171448d54e6e8d\"" Jan 17 12:23:34.949826 containerd[1548]: time="2025-01-17T12:23:34.949796148Z" level=info msg="CreateContainer within sandbox \"d89ddd62de8a1ac7a0c6fdec99c7f5da262ce1eb2d7984b353171448d54e6e8d\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 17 12:23:34.959758 containerd[1548]: time="2025-01-17T12:23:34.959689766Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-rvkqr,Uid:c85f2c0e-97a2-4021-ae15-4f96370ba9bd,Namespace:kube-system,Attempt:1,} returns sandbox id \"aafc8ef2a974fbc6312a18f30e759a636a35a326588cbf971a9c950304ec6103\"" Jan 17 12:23:34.960186 kubelet[2717]: E0117 12:23:34.960170 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:23:34.961837 containerd[1548]: time="2025-01-17T12:23:34.961807881Z" level=info msg="CreateContainer within sandbox \"aafc8ef2a974fbc6312a18f30e759a636a35a326588cbf971a9c950304ec6103\" for container 
&ContainerMetadata{Name:coredns,Attempt:0,}" Jan 17 12:23:35.041947 containerd[1548]: time="2025-01-17T12:23:35.041763786Z" level=info msg="CreateContainer within sandbox \"d89ddd62de8a1ac7a0c6fdec99c7f5da262ce1eb2d7984b353171448d54e6e8d\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"df2805d409806da25734653b64f5d9790f1bdde779eaa3f3240248004bc94c9a\"" Jan 17 12:23:35.042246 containerd[1548]: time="2025-01-17T12:23:35.042203265Z" level=info msg="StartContainer for \"df2805d409806da25734653b64f5d9790f1bdde779eaa3f3240248004bc94c9a\"" Jan 17 12:23:35.045823 containerd[1548]: time="2025-01-17T12:23:35.045744897Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:23:35.045928 containerd[1548]: time="2025-01-17T12:23:35.045802697Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:23:35.045928 containerd[1548]: time="2025-01-17T12:23:35.045818017Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:23:35.046051 containerd[1548]: time="2025-01-17T12:23:35.045913417Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:23:35.048970 containerd[1548]: time="2025-01-17T12:23:35.048910370Z" level=info msg="CreateContainer within sandbox \"aafc8ef2a974fbc6312a18f30e759a636a35a326588cbf971a9c950304ec6103\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"72b45e5e39817ef42bd6c15e7ad400bcc55d065996c64a0b9579b27cca3e1d6b\"" Jan 17 12:23:35.050738 containerd[1548]: time="2025-01-17T12:23:35.049913968Z" level=info msg="StartContainer for \"72b45e5e39817ef42bd6c15e7ad400bcc55d065996c64a0b9579b27cca3e1d6b\"" Jan 17 12:23:35.073457 systemd-resolved[1434]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 17 12:23:35.098341 containerd[1548]: time="2025-01-17T12:23:35.098297663Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-d8gl2,Uid:0f7f2ce6-11ce-4b25-b85f-2f1455b73126,Namespace:calico-system,Attempt:1,} returns sandbox id \"50303a39c30d5d187d67277aac95c232ccb3a932295916b82f4e5fa6834f4727\"" Jan 17 12:23:35.116656 containerd[1548]: time="2025-01-17T12:23:35.116619223Z" level=info msg="StartContainer for \"72b45e5e39817ef42bd6c15e7ad400bcc55d065996c64a0b9579b27cca3e1d6b\" returns successfully" Jan 17 12:23:35.172717 containerd[1548]: time="2025-01-17T12:23:35.172602622Z" level=info msg="StartContainer for \"df2805d409806da25734653b64f5d9790f1bdde779eaa3f3240248004bc94c9a\" returns successfully" Jan 17 12:23:35.335515 systemd-networkd[1230]: calia0ebdaf1f4b: Gained IPv6LL Jan 17 12:23:35.422818 containerd[1548]: time="2025-01-17T12:23:35.422769039Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:23:35.431422 containerd[1548]: time="2025-01-17T12:23:35.431381221Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=31953828" Jan 17 12:23:35.434039 containerd[1548]: time="2025-01-17T12:23:35.433405416Z" level=info msg="ImageCreate event name:\"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"}" Jan 17 12:23:35.436044 containerd[1548]: time="2025-01-17T12:23:35.435998291Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:23:35.436722 containerd[1548]: time="2025-01-17T12:23:35.436688049Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"33323450\" in 1.67906427s" Jan 17 12:23:35.436822 containerd[1548]: time="2025-01-17T12:23:35.436805609Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\"" Jan 17 12:23:35.437373 containerd[1548]: time="2025-01-17T12:23:35.437298448Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Jan 17 12:23:35.446972 containerd[1548]: time="2025-01-17T12:23:35.446939187Z" level=info msg="CreateContainer within sandbox \"6b53e41618101489948fcd4c924f83c241d0b7ddeb5bc62be2ef08ae4e9816de\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jan 17 12:23:35.468489 containerd[1548]: time="2025-01-17T12:23:35.468434340Z" level=info msg="CreateContainer within sandbox \"6b53e41618101489948fcd4c924f83c241d0b7ddeb5bc62be2ef08ae4e9816de\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"14e068d23f996a0ca6b8aeb9c72c31318cf8c3a6fd3ae8ccd6e541638930727e\"" Jan 17 12:23:35.468911 containerd[1548]: time="2025-01-17T12:23:35.468888179Z" level=info msg="StartContainer for \"14e068d23f996a0ca6b8aeb9c72c31318cf8c3a6fd3ae8ccd6e541638930727e\"" Jan 17 12:23:35.538042 containerd[1548]: time="2025-01-17T12:23:35.537908589Z" level=info msg="StartContainer for \"14e068d23f996a0ca6b8aeb9c72c31318cf8c3a6fd3ae8ccd6e541638930727e\" returns successfully" Jan 17 12:23:35.700564 kubelet[2717]: E0117 12:23:35.700444 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:23:35.706980 kubelet[2717]: E0117 12:23:35.706954 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:23:35.720615 kubelet[2717]: I0117 12:23:35.720578 2717 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5d99b78498-wqsg9" podStartSLOduration=23.720410754 podStartE2EDuration="23.720410754s" podCreationTimestamp="2025-01-17 12:23:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:23:35.710257376 +0000 UTC m=+47.323552295" watchObservedRunningTime="2025-01-17 12:23:35.720410754 +0000 UTC m=+47.333705593" Jan 17 12:23:35.720907 kubelet[2717]: I0117 12:23:35.720866 2717 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-9bdf448d6-lzvz2" podStartSLOduration=21.040989845 podStartE2EDuration="22.720831753s" podCreationTimestamp="2025-01-17 
12:23:13 +0000 UTC" firstStartedPulling="2025-01-17 12:23:33.75733586 +0000 UTC m=+45.370630699" lastFinishedPulling="2025-01-17 12:23:35.437177768 +0000 UTC m=+47.050472607" observedRunningTime="2025-01-17 12:23:35.719162876 +0000 UTC m=+47.332457675" watchObservedRunningTime="2025-01-17 12:23:35.720831753 +0000 UTC m=+47.334126592" Jan 17 12:23:35.848636 systemd-networkd[1230]: cali1cbaecf3efa: Gained IPv6LL Jan 17 12:23:35.975615 systemd-networkd[1230]: cali0d478f83769: Gained IPv6LL Jan 17 12:23:36.124005 kubelet[2717]: I0117 12:23:36.123898 2717 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 17 12:23:36.157597 kubelet[2717]: I0117 12:23:36.157107 2717 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-rvkqr" podStartSLOduration=34.157067775 podStartE2EDuration="34.157067775s" podCreationTimestamp="2025-01-17 12:23:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:23:35.732320728 +0000 UTC m=+47.345615647" watchObservedRunningTime="2025-01-17 12:23:36.157067775 +0000 UTC m=+47.770362574" Jan 17 12:23:36.168593 systemd-networkd[1230]: cali49e0c930ce6: Gained IPv6LL Jan 17 12:23:36.550895 systemd[1]: Started sshd@9-10.0.0.132:22-10.0.0.1:57030.service - OpenSSH per-connection server daemon (10.0.0.1:57030). Jan 17 12:23:36.578423 containerd[1548]: time="2025-01-17T12:23:36.578367686Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:23:36.579053 containerd[1548]: time="2025-01-17T12:23:36.578954644Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7464730" Jan 17 12:23:36.580279 containerd[1548]: time="2025-01-17T12:23:36.580250802Z" level=info msg="ImageCreate event name:\"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:23:36.582203 containerd[1548]: time="2025-01-17T12:23:36.582079998Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:23:36.583247 containerd[1548]: time="2025-01-17T12:23:36.583210235Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"8834384\" in 1.145880147s" Jan 17 12:23:36.583247 containerd[1548]: time="2025-01-17T12:23:36.583243115Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\"" Jan 17 12:23:36.586143 containerd[1548]: time="2025-01-17T12:23:36.586081069Z" level=info msg="CreateContainer within sandbox \"50303a39c30d5d187d67277aac95c232ccb3a932295916b82f4e5fa6834f4727\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jan 17 12:23:36.600334 containerd[1548]: time="2025-01-17T12:23:36.600297839Z" level=info msg="CreateContainer within sandbox \"50303a39c30d5d187d67277aac95c232ccb3a932295916b82f4e5fa6834f4727\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns 
container id \"f6dc93c0341467de6967843986409ea39b3f252aa6f18e1c36e86c9cc0b318e7\"" Jan 17 12:23:36.601588 containerd[1548]: time="2025-01-17T12:23:36.600803118Z" level=info msg="StartContainer for \"f6dc93c0341467de6967843986409ea39b3f252aa6f18e1c36e86c9cc0b318e7\"" Jan 17 12:23:36.618939 sshd[5123]: Accepted publickey for core from 10.0.0.1 port 57030 ssh2: RSA SHA256:xsFjL0Ru499iNfhRyIcjP6wTIWZ5oE8f5Pm6hYv+KHo Jan 17 12:23:36.620614 sshd[5123]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:23:36.628491 systemd-logind[1520]: New session 10 of user core. Jan 17 12:23:36.631974 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 17 12:23:36.677767 containerd[1548]: time="2025-01-17T12:23:36.677716596Z" level=info msg="StartContainer for \"f6dc93c0341467de6967843986409ea39b3f252aa6f18e1c36e86c9cc0b318e7\" returns successfully" Jan 17 12:23:36.678879 containerd[1548]: time="2025-01-17T12:23:36.678841153Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Jan 17 12:23:36.710706 kubelet[2717]: I0117 12:23:36.710678 2717 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 17 12:23:36.711778 kubelet[2717]: E0117 12:23:36.711464 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:23:36.711778 kubelet[2717]: I0117 12:23:36.711563 2717 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 17 12:23:36.885200 sshd[5123]: pam_unix(sshd:session): session closed for user core Jan 17 12:23:36.898603 systemd[1]: Started sshd@10-10.0.0.132:22-10.0.0.1:57034.service - OpenSSH per-connection server daemon (10.0.0.1:57034). Jan 17 12:23:36.899004 systemd[1]: sshd@9-10.0.0.132:22-10.0.0.1:57030.service: Deactivated successfully. Jan 17 12:23:36.902589 systemd[1]: session-10.scope: Deactivated successfully. Jan 17 12:23:36.903646 systemd-logind[1520]: Session 10 logged out. Waiting for processes to exit. Jan 17 12:23:36.904494 systemd-logind[1520]: Removed session 10. Jan 17 12:23:36.926869 sshd[5173]: Accepted publickey for core from 10.0.0.1 port 57034 ssh2: RSA SHA256:xsFjL0Ru499iNfhRyIcjP6wTIWZ5oE8f5Pm6hYv+KHo Jan 17 12:23:36.928203 sshd[5173]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:23:36.932109 systemd-logind[1520]: New session 11 of user core. Jan 17 12:23:36.938630 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 17 12:23:37.143569 sshd[5173]: pam_unix(sshd:session): session closed for user core Jan 17 12:23:37.150662 systemd[1]: Started sshd@11-10.0.0.132:22-10.0.0.1:57036.service - OpenSSH per-connection server daemon (10.0.0.1:57036). Jan 17 12:23:37.151039 systemd[1]: sshd@10-10.0.0.132:22-10.0.0.1:57034.service: Deactivated successfully. Jan 17 12:23:37.154039 systemd[1]: session-11.scope: Deactivated successfully. Jan 17 12:23:37.159864 systemd-logind[1520]: Session 11 logged out. Waiting for processes to exit. Jan 17 12:23:37.165974 systemd-logind[1520]: Removed session 11. Jan 17 12:23:37.193216 sshd[5190]: Accepted publickey for core from 10.0.0.1 port 57036 ssh2: RSA SHA256:xsFjL0Ru499iNfhRyIcjP6wTIWZ5oE8f5Pm6hYv+KHo Jan 17 12:23:37.194475 sshd[5190]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:23:37.200829 systemd-logind[1520]: New session 12 of user core. 
Jan 17 12:23:37.209639 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 17 12:23:37.350395 sshd[5190]: pam_unix(sshd:session): session closed for user core Jan 17 12:23:37.353479 systemd[1]: sshd@11-10.0.0.132:22-10.0.0.1:57036.service: Deactivated successfully. Jan 17 12:23:37.355342 systemd-logind[1520]: Session 12 logged out. Waiting for processes to exit. Jan 17 12:23:37.355431 systemd[1]: session-12.scope: Deactivated successfully. Jan 17 12:23:37.356437 systemd-logind[1520]: Removed session 12. Jan 17 12:23:37.712183 kubelet[2717]: E0117 12:23:37.712108 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:23:37.737178 containerd[1548]: time="2025-01-17T12:23:37.737132159Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:23:37.737720 containerd[1548]: time="2025-01-17T12:23:37.737675438Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=9883368" Jan 17 12:23:37.738615 containerd[1548]: time="2025-01-17T12:23:37.738579356Z" level=info msg="ImageCreate event name:\"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:23:37.740604 containerd[1548]: time="2025-01-17T12:23:37.740572392Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:23:37.741207 containerd[1548]: time="2025-01-17T12:23:37.741169951Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11252974\" in 1.062294918s" Jan 17 12:23:37.741251 containerd[1548]: time="2025-01-17T12:23:37.741204271Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\"" Jan 17 12:23:37.742908 containerd[1548]: time="2025-01-17T12:23:37.742873587Z" level=info msg="CreateContainer within sandbox \"50303a39c30d5d187d67277aac95c232ccb3a932295916b82f4e5fa6834f4727\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jan 17 12:23:37.762429 containerd[1548]: time="2025-01-17T12:23:37.762397307Z" level=info msg="CreateContainer within sandbox \"50303a39c30d5d187d67277aac95c232ccb3a932295916b82f4e5fa6834f4727\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"0ab6a9329f5c4469d3c58ba4069947b801b20d3d76a979cbbf0b7faead8788a1\"" Jan 17 12:23:37.762972 containerd[1548]: time="2025-01-17T12:23:37.762921946Z" level=info msg="StartContainer for \"0ab6a9329f5c4469d3c58ba4069947b801b20d3d76a979cbbf0b7faead8788a1\"" Jan 17 12:23:37.828559 containerd[1548]: time="2025-01-17T12:23:37.828513811Z" level=info msg="StartContainer for \"0ab6a9329f5c4469d3c58ba4069947b801b20d3d76a979cbbf0b7faead8788a1\" returns successfully" Jan 17 12:23:38.583165 
kubelet[2717]: I0117 12:23:38.583115 2717 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jan 17 12:23:38.583165 kubelet[2717]: I0117 12:23:38.583178 2717 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jan 17 12:23:38.728113 kubelet[2717]: I0117 12:23:38.728067 2717 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-d8gl2" podStartSLOduration=24.085934071 podStartE2EDuration="26.72802732s" podCreationTimestamp="2025-01-17 12:23:12 +0000 UTC" firstStartedPulling="2025-01-17 12:23:35.099438661 +0000 UTC m=+46.712733500" lastFinishedPulling="2025-01-17 12:23:37.74153195 +0000 UTC m=+49.354826749" observedRunningTime="2025-01-17 12:23:38.726086124 +0000 UTC m=+50.339380963" watchObservedRunningTime="2025-01-17 12:23:38.72802732 +0000 UTC m=+50.341322159" Jan 17 12:23:42.365578 systemd[1]: Started sshd@12-10.0.0.132:22-10.0.0.1:57048.service - OpenSSH per-connection server daemon (10.0.0.1:57048). Jan 17 12:23:42.409681 sshd[5257]: Accepted publickey for core from 10.0.0.1 port 57048 ssh2: RSA SHA256:xsFjL0Ru499iNfhRyIcjP6wTIWZ5oE8f5Pm6hYv+KHo Jan 17 12:23:42.411562 sshd[5257]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:23:42.415421 systemd-logind[1520]: New session 13 of user core. Jan 17 12:23:42.420698 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 17 12:23:42.634630 sshd[5257]: pam_unix(sshd:session): session closed for user core Jan 17 12:23:42.645620 systemd[1]: Started sshd@13-10.0.0.132:22-10.0.0.1:51470.service - OpenSSH per-connection server daemon (10.0.0.1:51470). Jan 17 12:23:42.646415 systemd[1]: sshd@12-10.0.0.132:22-10.0.0.1:57048.service: Deactivated successfully. Jan 17 12:23:42.648198 systemd[1]: session-13.scope: Deactivated successfully. Jan 17 12:23:42.648910 systemd-logind[1520]: Session 13 logged out. Waiting for processes to exit. Jan 17 12:23:42.650343 systemd-logind[1520]: Removed session 13. Jan 17 12:23:42.675506 sshd[5269]: Accepted publickey for core from 10.0.0.1 port 51470 ssh2: RSA SHA256:xsFjL0Ru499iNfhRyIcjP6wTIWZ5oE8f5Pm6hYv+KHo Jan 17 12:23:42.676678 sshd[5269]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:23:42.684583 systemd-logind[1520]: New session 14 of user core. Jan 17 12:23:42.697669 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 17 12:23:42.901343 sshd[5269]: pam_unix(sshd:session): session closed for user core Jan 17 12:23:42.909687 systemd[1]: Started sshd@14-10.0.0.132:22-10.0.0.1:51482.service - OpenSSH per-connection server daemon (10.0.0.1:51482). Jan 17 12:23:42.910039 systemd[1]: sshd@13-10.0.0.132:22-10.0.0.1:51470.service: Deactivated successfully. Jan 17 12:23:42.913794 systemd-logind[1520]: Session 14 logged out. Waiting for processes to exit. Jan 17 12:23:42.914312 systemd[1]: session-14.scope: Deactivated successfully. Jan 17 12:23:42.915933 systemd-logind[1520]: Removed session 14. Jan 17 12:23:42.939431 sshd[5283]: Accepted publickey for core from 10.0.0.1 port 51482 ssh2: RSA SHA256:xsFjL0Ru499iNfhRyIcjP6wTIWZ5oE8f5Pm6hYv+KHo Jan 17 12:23:42.940523 sshd[5283]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:23:42.944506 systemd-logind[1520]: New session 15 of user core. 
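Editor's note: the pod_startup_latency_tracker line above for csi-node-driver-d8gl2 can be checked by hand: podStartSLOduration is the end-to-end startup duration with the time spent pulling images subtracted (lastFinishedPulling minus firstStartedPulling, about 2.642s here). The snippet below only redoes that arithmetic with values copied from the log; it is not kubelet code.

```go
// Arithmetic check of the csi-node-driver-d8gl2 startup-latency line above.
package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

	firstPull, _ := time.Parse(layout, "2025-01-17 12:23:35.099438661 +0000 UTC")
	lastPull, _ := time.Parse(layout, "2025-01-17 12:23:37.74153195 +0000 UTC")

	e2e := 26.72802732 * float64(time.Second) // podStartE2EDuration from the log

	pulling := lastPull.Sub(firstPull)       // ~2.642s spent pulling images
	slo := time.Duration(e2e) - pulling

	fmt.Println("image pulling took:", pulling)
	fmt.Printf("podStartSLOduration: %.3fs\n", slo.Seconds())
	// ~24.086s, in line with podStartSLOduration=24.085934071 in the log
}
```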
Jan 17 12:23:42.955748 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 17 12:23:44.471728 sshd[5283]: pam_unix(sshd:session): session closed for user core Jan 17 12:23:44.480547 systemd[1]: Started sshd@15-10.0.0.132:22-10.0.0.1:51496.service - OpenSSH per-connection server daemon (10.0.0.1:51496). Jan 17 12:23:44.481694 systemd[1]: sshd@14-10.0.0.132:22-10.0.0.1:51482.service: Deactivated successfully. Jan 17 12:23:44.488050 systemd[1]: session-15.scope: Deactivated successfully. Jan 17 12:23:44.489896 systemd-logind[1520]: Session 15 logged out. Waiting for processes to exit. Jan 17 12:23:44.493209 systemd-logind[1520]: Removed session 15. Jan 17 12:23:44.525628 sshd[5304]: Accepted publickey for core from 10.0.0.1 port 51496 ssh2: RSA SHA256:xsFjL0Ru499iNfhRyIcjP6wTIWZ5oE8f5Pm6hYv+KHo Jan 17 12:23:44.526852 sshd[5304]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:23:44.530963 systemd-logind[1520]: New session 16 of user core. Jan 17 12:23:44.542654 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 17 12:23:44.879006 sshd[5304]: pam_unix(sshd:session): session closed for user core Jan 17 12:23:44.885768 systemd[1]: Started sshd@16-10.0.0.132:22-10.0.0.1:51500.service - OpenSSH per-connection server daemon (10.0.0.1:51500). Jan 17 12:23:44.886172 systemd[1]: sshd@15-10.0.0.132:22-10.0.0.1:51496.service: Deactivated successfully. Jan 17 12:23:44.890094 systemd[1]: session-16.scope: Deactivated successfully. Jan 17 12:23:44.892862 systemd-logind[1520]: Session 16 logged out. Waiting for processes to exit. Jan 17 12:23:44.895094 systemd-logind[1520]: Removed session 16. Jan 17 12:23:44.914747 sshd[5318]: Accepted publickey for core from 10.0.0.1 port 51500 ssh2: RSA SHA256:xsFjL0Ru499iNfhRyIcjP6wTIWZ5oE8f5Pm6hYv+KHo Jan 17 12:23:44.916776 sshd[5318]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:23:44.921750 systemd-logind[1520]: New session 17 of user core. Jan 17 12:23:44.933986 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 17 12:23:45.077270 sshd[5318]: pam_unix(sshd:session): session closed for user core Jan 17 12:23:45.081357 systemd[1]: sshd@16-10.0.0.132:22-10.0.0.1:51500.service: Deactivated successfully. Jan 17 12:23:45.083516 systemd[1]: session-17.scope: Deactivated successfully. Jan 17 12:23:45.083971 systemd-logind[1520]: Session 17 logged out. Waiting for processes to exit. Jan 17 12:23:45.084807 systemd-logind[1520]: Removed session 17. Jan 17 12:23:45.269460 kubelet[2717]: I0117 12:23:45.268890 2717 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 17 12:23:48.468908 containerd[1548]: time="2025-01-17T12:23:48.468829925Z" level=info msg="StopPodSandbox for \"ab7b3801b62d55de0573ab8496d50c8ec4ea027fba01f5e43c10d8963a1a0944\"" Jan 17 12:23:48.539198 containerd[1548]: 2025-01-17 12:23:48.505 [WARNING][5401] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ab7b3801b62d55de0573ab8496d50c8ec4ea027fba01f5e43c10d8963a1a0944" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--kf6mr-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"cf7e29e6-980f-42a1-85a4-fdb746002f5f", ResourceVersion:"873", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 23, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1aa69fb518a931a2e0327b09e30b3bd0d6ab57b4abf918aebdc3f02bf5ee7ed4", Pod:"coredns-76f75df574-kf6mr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9527ee1c5bc", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:23:48.539198 containerd[1548]: 2025-01-17 12:23:48.506 [INFO][5401] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="ab7b3801b62d55de0573ab8496d50c8ec4ea027fba01f5e43c10d8963a1a0944" Jan 17 12:23:48.539198 containerd[1548]: 2025-01-17 12:23:48.506 [INFO][5401] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ab7b3801b62d55de0573ab8496d50c8ec4ea027fba01f5e43c10d8963a1a0944" iface="eth0" netns="" Jan 17 12:23:48.539198 containerd[1548]: 2025-01-17 12:23:48.506 [INFO][5401] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="ab7b3801b62d55de0573ab8496d50c8ec4ea027fba01f5e43c10d8963a1a0944" Jan 17 12:23:48.539198 containerd[1548]: 2025-01-17 12:23:48.506 [INFO][5401] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ab7b3801b62d55de0573ab8496d50c8ec4ea027fba01f5e43c10d8963a1a0944" Jan 17 12:23:48.539198 containerd[1548]: 2025-01-17 12:23:48.525 [INFO][5409] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ab7b3801b62d55de0573ab8496d50c8ec4ea027fba01f5e43c10d8963a1a0944" HandleID="k8s-pod-network.ab7b3801b62d55de0573ab8496d50c8ec4ea027fba01f5e43c10d8963a1a0944" Workload="localhost-k8s-coredns--76f75df574--kf6mr-eth0" Jan 17 12:23:48.539198 containerd[1548]: 2025-01-17 12:23:48.525 [INFO][5409] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:23:48.539198 containerd[1548]: 2025-01-17 12:23:48.525 [INFO][5409] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:23:48.539198 containerd[1548]: 2025-01-17 12:23:48.534 [WARNING][5409] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ab7b3801b62d55de0573ab8496d50c8ec4ea027fba01f5e43c10d8963a1a0944" HandleID="k8s-pod-network.ab7b3801b62d55de0573ab8496d50c8ec4ea027fba01f5e43c10d8963a1a0944" Workload="localhost-k8s-coredns--76f75df574--kf6mr-eth0" Jan 17 12:23:48.539198 containerd[1548]: 2025-01-17 12:23:48.534 [INFO][5409] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ab7b3801b62d55de0573ab8496d50c8ec4ea027fba01f5e43c10d8963a1a0944" HandleID="k8s-pod-network.ab7b3801b62d55de0573ab8496d50c8ec4ea027fba01f5e43c10d8963a1a0944" Workload="localhost-k8s-coredns--76f75df574--kf6mr-eth0" Jan 17 12:23:48.539198 containerd[1548]: 2025-01-17 12:23:48.535 [INFO][5409] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:23:48.539198 containerd[1548]: 2025-01-17 12:23:48.537 [INFO][5401] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="ab7b3801b62d55de0573ab8496d50c8ec4ea027fba01f5e43c10d8963a1a0944" Jan 17 12:23:48.539644 containerd[1548]: time="2025-01-17T12:23:48.539241056Z" level=info msg="TearDown network for sandbox \"ab7b3801b62d55de0573ab8496d50c8ec4ea027fba01f5e43c10d8963a1a0944\" successfully" Jan 17 12:23:48.539644 containerd[1548]: time="2025-01-17T12:23:48.539265656Z" level=info msg="StopPodSandbox for \"ab7b3801b62d55de0573ab8496d50c8ec4ea027fba01f5e43c10d8963a1a0944\" returns successfully" Jan 17 12:23:48.540211 containerd[1548]: time="2025-01-17T12:23:48.540185055Z" level=info msg="RemovePodSandbox for \"ab7b3801b62d55de0573ab8496d50c8ec4ea027fba01f5e43c10d8963a1a0944\"" Jan 17 12:23:48.549997 containerd[1548]: time="2025-01-17T12:23:48.549950119Z" level=info msg="Forcibly stopping sandbox \"ab7b3801b62d55de0573ab8496d50c8ec4ea027fba01f5e43c10d8963a1a0944\"" Jan 17 12:23:48.625154 containerd[1548]: 2025-01-17 12:23:48.595 [WARNING][5431] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ab7b3801b62d55de0573ab8496d50c8ec4ea027fba01f5e43c10d8963a1a0944" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--kf6mr-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"cf7e29e6-980f-42a1-85a4-fdb746002f5f", ResourceVersion:"873", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 23, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1aa69fb518a931a2e0327b09e30b3bd0d6ab57b4abf918aebdc3f02bf5ee7ed4", Pod:"coredns-76f75df574-kf6mr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9527ee1c5bc", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:23:48.625154 containerd[1548]: 2025-01-17 12:23:48.595 [INFO][5431] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="ab7b3801b62d55de0573ab8496d50c8ec4ea027fba01f5e43c10d8963a1a0944" Jan 17 12:23:48.625154 containerd[1548]: 2025-01-17 12:23:48.595 [INFO][5431] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ab7b3801b62d55de0573ab8496d50c8ec4ea027fba01f5e43c10d8963a1a0944" iface="eth0" netns="" Jan 17 12:23:48.625154 containerd[1548]: 2025-01-17 12:23:48.595 [INFO][5431] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="ab7b3801b62d55de0573ab8496d50c8ec4ea027fba01f5e43c10d8963a1a0944" Jan 17 12:23:48.625154 containerd[1548]: 2025-01-17 12:23:48.595 [INFO][5431] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ab7b3801b62d55de0573ab8496d50c8ec4ea027fba01f5e43c10d8963a1a0944" Jan 17 12:23:48.625154 containerd[1548]: 2025-01-17 12:23:48.612 [INFO][5439] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ab7b3801b62d55de0573ab8496d50c8ec4ea027fba01f5e43c10d8963a1a0944" HandleID="k8s-pod-network.ab7b3801b62d55de0573ab8496d50c8ec4ea027fba01f5e43c10d8963a1a0944" Workload="localhost-k8s-coredns--76f75df574--kf6mr-eth0" Jan 17 12:23:48.625154 containerd[1548]: 2025-01-17 12:23:48.613 [INFO][5439] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:23:48.625154 containerd[1548]: 2025-01-17 12:23:48.613 [INFO][5439] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:23:48.625154 containerd[1548]: 2025-01-17 12:23:48.620 [WARNING][5439] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ab7b3801b62d55de0573ab8496d50c8ec4ea027fba01f5e43c10d8963a1a0944" HandleID="k8s-pod-network.ab7b3801b62d55de0573ab8496d50c8ec4ea027fba01f5e43c10d8963a1a0944" Workload="localhost-k8s-coredns--76f75df574--kf6mr-eth0" Jan 17 12:23:48.625154 containerd[1548]: 2025-01-17 12:23:48.620 [INFO][5439] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ab7b3801b62d55de0573ab8496d50c8ec4ea027fba01f5e43c10d8963a1a0944" HandleID="k8s-pod-network.ab7b3801b62d55de0573ab8496d50c8ec4ea027fba01f5e43c10d8963a1a0944" Workload="localhost-k8s-coredns--76f75df574--kf6mr-eth0" Jan 17 12:23:48.625154 containerd[1548]: 2025-01-17 12:23:48.621 [INFO][5439] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:23:48.625154 containerd[1548]: 2025-01-17 12:23:48.623 [INFO][5431] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="ab7b3801b62d55de0573ab8496d50c8ec4ea027fba01f5e43c10d8963a1a0944" Jan 17 12:23:48.625577 containerd[1548]: time="2025-01-17T12:23:48.625186883Z" level=info msg="TearDown network for sandbox \"ab7b3801b62d55de0573ab8496d50c8ec4ea027fba01f5e43c10d8963a1a0944\" successfully" Jan 17 12:23:48.634841 containerd[1548]: time="2025-01-17T12:23:48.634795948Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ab7b3801b62d55de0573ab8496d50c8ec4ea027fba01f5e43c10d8963a1a0944\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 12:23:48.634888 containerd[1548]: time="2025-01-17T12:23:48.634878228Z" level=info msg="RemovePodSandbox \"ab7b3801b62d55de0573ab8496d50c8ec4ea027fba01f5e43c10d8963a1a0944\" returns successfully" Jan 17 12:23:48.635657 containerd[1548]: time="2025-01-17T12:23:48.635408987Z" level=info msg="StopPodSandbox for \"85475dae281093367a54e3d7e5299dd1c0c2c0881e9181a9208c08a128867632\"" Jan 17 12:23:48.702615 containerd[1548]: 2025-01-17 12:23:48.668 [WARNING][5462] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="85475dae281093367a54e3d7e5299dd1c0c2c0881e9181a9208c08a128867632" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--rvkqr-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"c85f2c0e-97a2-4021-ae15-4f96370ba9bd", ResourceVersion:"923", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 23, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"aafc8ef2a974fbc6312a18f30e759a636a35a326588cbf971a9c950304ec6103", Pod:"coredns-76f75df574-rvkqr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali49e0c930ce6", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:23:48.702615 containerd[1548]: 2025-01-17 12:23:48.669 [INFO][5462] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="85475dae281093367a54e3d7e5299dd1c0c2c0881e9181a9208c08a128867632" Jan 17 12:23:48.702615 containerd[1548]: 2025-01-17 12:23:48.669 [INFO][5462] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="85475dae281093367a54e3d7e5299dd1c0c2c0881e9181a9208c08a128867632" iface="eth0" netns="" Jan 17 12:23:48.702615 containerd[1548]: 2025-01-17 12:23:48.669 [INFO][5462] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="85475dae281093367a54e3d7e5299dd1c0c2c0881e9181a9208c08a128867632" Jan 17 12:23:48.702615 containerd[1548]: 2025-01-17 12:23:48.669 [INFO][5462] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="85475dae281093367a54e3d7e5299dd1c0c2c0881e9181a9208c08a128867632" Jan 17 12:23:48.702615 containerd[1548]: 2025-01-17 12:23:48.689 [INFO][5469] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="85475dae281093367a54e3d7e5299dd1c0c2c0881e9181a9208c08a128867632" HandleID="k8s-pod-network.85475dae281093367a54e3d7e5299dd1c0c2c0881e9181a9208c08a128867632" Workload="localhost-k8s-coredns--76f75df574--rvkqr-eth0" Jan 17 12:23:48.702615 containerd[1548]: 2025-01-17 12:23:48.690 [INFO][5469] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:23:48.702615 containerd[1548]: 2025-01-17 12:23:48.690 [INFO][5469] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:23:48.702615 containerd[1548]: 2025-01-17 12:23:48.697 [WARNING][5469] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="85475dae281093367a54e3d7e5299dd1c0c2c0881e9181a9208c08a128867632" HandleID="k8s-pod-network.85475dae281093367a54e3d7e5299dd1c0c2c0881e9181a9208c08a128867632" Workload="localhost-k8s-coredns--76f75df574--rvkqr-eth0" Jan 17 12:23:48.702615 containerd[1548]: 2025-01-17 12:23:48.697 [INFO][5469] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="85475dae281093367a54e3d7e5299dd1c0c2c0881e9181a9208c08a128867632" HandleID="k8s-pod-network.85475dae281093367a54e3d7e5299dd1c0c2c0881e9181a9208c08a128867632" Workload="localhost-k8s-coredns--76f75df574--rvkqr-eth0" Jan 17 12:23:48.702615 containerd[1548]: 2025-01-17 12:23:48.699 [INFO][5469] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:23:48.702615 containerd[1548]: 2025-01-17 12:23:48.700 [INFO][5462] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="85475dae281093367a54e3d7e5299dd1c0c2c0881e9181a9208c08a128867632" Jan 17 12:23:48.702615 containerd[1548]: time="2025-01-17T12:23:48.702604483Z" level=info msg="TearDown network for sandbox \"85475dae281093367a54e3d7e5299dd1c0c2c0881e9181a9208c08a128867632\" successfully" Jan 17 12:23:48.703030 containerd[1548]: time="2025-01-17T12:23:48.702634282Z" level=info msg="StopPodSandbox for \"85475dae281093367a54e3d7e5299dd1c0c2c0881e9181a9208c08a128867632\" returns successfully" Jan 17 12:23:48.703191 containerd[1548]: time="2025-01-17T12:23:48.703092082Z" level=info msg="RemovePodSandbox for \"85475dae281093367a54e3d7e5299dd1c0c2c0881e9181a9208c08a128867632\"" Jan 17 12:23:48.703191 containerd[1548]: time="2025-01-17T12:23:48.703181602Z" level=info msg="Forcibly stopping sandbox \"85475dae281093367a54e3d7e5299dd1c0c2c0881e9181a9208c08a128867632\"" Jan 17 12:23:48.776131 containerd[1548]: 2025-01-17 12:23:48.744 [WARNING][5491] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="85475dae281093367a54e3d7e5299dd1c0c2c0881e9181a9208c08a128867632" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--rvkqr-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"c85f2c0e-97a2-4021-ae15-4f96370ba9bd", ResourceVersion:"923", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 23, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"aafc8ef2a974fbc6312a18f30e759a636a35a326588cbf971a9c950304ec6103", Pod:"coredns-76f75df574-rvkqr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali49e0c930ce6", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:23:48.776131 containerd[1548]: 2025-01-17 12:23:48.744 [INFO][5491] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="85475dae281093367a54e3d7e5299dd1c0c2c0881e9181a9208c08a128867632" Jan 17 12:23:48.776131 containerd[1548]: 2025-01-17 12:23:48.744 [INFO][5491] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="85475dae281093367a54e3d7e5299dd1c0c2c0881e9181a9208c08a128867632" iface="eth0" netns="" Jan 17 12:23:48.776131 containerd[1548]: 2025-01-17 12:23:48.744 [INFO][5491] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="85475dae281093367a54e3d7e5299dd1c0c2c0881e9181a9208c08a128867632" Jan 17 12:23:48.776131 containerd[1548]: 2025-01-17 12:23:48.744 [INFO][5491] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="85475dae281093367a54e3d7e5299dd1c0c2c0881e9181a9208c08a128867632" Jan 17 12:23:48.776131 containerd[1548]: 2025-01-17 12:23:48.763 [INFO][5499] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="85475dae281093367a54e3d7e5299dd1c0c2c0881e9181a9208c08a128867632" HandleID="k8s-pod-network.85475dae281093367a54e3d7e5299dd1c0c2c0881e9181a9208c08a128867632" Workload="localhost-k8s-coredns--76f75df574--rvkqr-eth0" Jan 17 12:23:48.776131 containerd[1548]: 2025-01-17 12:23:48.763 [INFO][5499] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:23:48.776131 containerd[1548]: 2025-01-17 12:23:48.763 [INFO][5499] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:23:48.776131 containerd[1548]: 2025-01-17 12:23:48.771 [WARNING][5499] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="85475dae281093367a54e3d7e5299dd1c0c2c0881e9181a9208c08a128867632" HandleID="k8s-pod-network.85475dae281093367a54e3d7e5299dd1c0c2c0881e9181a9208c08a128867632" Workload="localhost-k8s-coredns--76f75df574--rvkqr-eth0" Jan 17 12:23:48.776131 containerd[1548]: 2025-01-17 12:23:48.771 [INFO][5499] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="85475dae281093367a54e3d7e5299dd1c0c2c0881e9181a9208c08a128867632" HandleID="k8s-pod-network.85475dae281093367a54e3d7e5299dd1c0c2c0881e9181a9208c08a128867632" Workload="localhost-k8s-coredns--76f75df574--rvkqr-eth0" Jan 17 12:23:48.776131 containerd[1548]: 2025-01-17 12:23:48.772 [INFO][5499] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:23:48.776131 containerd[1548]: 2025-01-17 12:23:48.774 [INFO][5491] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="85475dae281093367a54e3d7e5299dd1c0c2c0881e9181a9208c08a128867632" Jan 17 12:23:48.776131 containerd[1548]: time="2025-01-17T12:23:48.776112808Z" level=info msg="TearDown network for sandbox \"85475dae281093367a54e3d7e5299dd1c0c2c0881e9181a9208c08a128867632\" successfully" Jan 17 12:23:48.778996 containerd[1548]: time="2025-01-17T12:23:48.778961684Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"85475dae281093367a54e3d7e5299dd1c0c2c0881e9181a9208c08a128867632\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 12:23:48.779047 containerd[1548]: time="2025-01-17T12:23:48.779014764Z" level=info msg="RemovePodSandbox \"85475dae281093367a54e3d7e5299dd1c0c2c0881e9181a9208c08a128867632\" returns successfully" Jan 17 12:23:48.779500 containerd[1548]: time="2025-01-17T12:23:48.779477043Z" level=info msg="StopPodSandbox for \"7c6f7aebe8e47fdebf5e4226c93fd1dba6a8861d79bda43cde364e68ce4b5570\"" Jan 17 12:23:48.849751 containerd[1548]: 2025-01-17 12:23:48.819 [WARNING][5523] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7c6f7aebe8e47fdebf5e4226c93fd1dba6a8861d79bda43cde364e68ce4b5570" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5d99b78498--wqsg9-eth0", GenerateName:"calico-apiserver-5d99b78498-", Namespace:"calico-apiserver", SelfLink:"", UID:"85f28c23-7266-45b9-ab87-b4179415ea7d", ResourceVersion:"918", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 23, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5d99b78498", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d89ddd62de8a1ac7a0c6fdec99c7f5da262ce1eb2d7984b353171448d54e6e8d", Pod:"calico-apiserver-5d99b78498-wqsg9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1cbaecf3efa", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:23:48.849751 containerd[1548]: 2025-01-17 12:23:48.819 [INFO][5523] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="7c6f7aebe8e47fdebf5e4226c93fd1dba6a8861d79bda43cde364e68ce4b5570" Jan 17 12:23:48.849751 containerd[1548]: 2025-01-17 12:23:48.819 [INFO][5523] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7c6f7aebe8e47fdebf5e4226c93fd1dba6a8861d79bda43cde364e68ce4b5570" iface="eth0" netns="" Jan 17 12:23:48.849751 containerd[1548]: 2025-01-17 12:23:48.819 [INFO][5523] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="7c6f7aebe8e47fdebf5e4226c93fd1dba6a8861d79bda43cde364e68ce4b5570" Jan 17 12:23:48.849751 containerd[1548]: 2025-01-17 12:23:48.819 [INFO][5523] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7c6f7aebe8e47fdebf5e4226c93fd1dba6a8861d79bda43cde364e68ce4b5570" Jan 17 12:23:48.849751 containerd[1548]: 2025-01-17 12:23:48.837 [INFO][5531] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7c6f7aebe8e47fdebf5e4226c93fd1dba6a8861d79bda43cde364e68ce4b5570" HandleID="k8s-pod-network.7c6f7aebe8e47fdebf5e4226c93fd1dba6a8861d79bda43cde364e68ce4b5570" Workload="localhost-k8s-calico--apiserver--5d99b78498--wqsg9-eth0" Jan 17 12:23:48.849751 containerd[1548]: 2025-01-17 12:23:48.837 [INFO][5531] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:23:48.849751 containerd[1548]: 2025-01-17 12:23:48.837 [INFO][5531] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:23:48.849751 containerd[1548]: 2025-01-17 12:23:48.845 [WARNING][5531] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7c6f7aebe8e47fdebf5e4226c93fd1dba6a8861d79bda43cde364e68ce4b5570" HandleID="k8s-pod-network.7c6f7aebe8e47fdebf5e4226c93fd1dba6a8861d79bda43cde364e68ce4b5570" Workload="localhost-k8s-calico--apiserver--5d99b78498--wqsg9-eth0" Jan 17 12:23:48.849751 containerd[1548]: 2025-01-17 12:23:48.845 [INFO][5531] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7c6f7aebe8e47fdebf5e4226c93fd1dba6a8861d79bda43cde364e68ce4b5570" HandleID="k8s-pod-network.7c6f7aebe8e47fdebf5e4226c93fd1dba6a8861d79bda43cde364e68ce4b5570" Workload="localhost-k8s-calico--apiserver--5d99b78498--wqsg9-eth0" Jan 17 12:23:48.849751 containerd[1548]: 2025-01-17 12:23:48.846 [INFO][5531] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:23:48.849751 containerd[1548]: 2025-01-17 12:23:48.848 [INFO][5523] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="7c6f7aebe8e47fdebf5e4226c93fd1dba6a8861d79bda43cde364e68ce4b5570" Jan 17 12:23:48.850269 containerd[1548]: time="2025-01-17T12:23:48.849788894Z" level=info msg="TearDown network for sandbox \"7c6f7aebe8e47fdebf5e4226c93fd1dba6a8861d79bda43cde364e68ce4b5570\" successfully" Jan 17 12:23:48.850269 containerd[1548]: time="2025-01-17T12:23:48.849816294Z" level=info msg="StopPodSandbox for \"7c6f7aebe8e47fdebf5e4226c93fd1dba6a8861d79bda43cde364e68ce4b5570\" returns successfully" Jan 17 12:23:48.851614 containerd[1548]: time="2025-01-17T12:23:48.851573931Z" level=info msg="RemovePodSandbox for \"7c6f7aebe8e47fdebf5e4226c93fd1dba6a8861d79bda43cde364e68ce4b5570\"" Jan 17 12:23:48.851671 containerd[1548]: time="2025-01-17T12:23:48.851619611Z" level=info msg="Forcibly stopping sandbox \"7c6f7aebe8e47fdebf5e4226c93fd1dba6a8861d79bda43cde364e68ce4b5570\"" Jan 17 12:23:48.919008 containerd[1548]: 2025-01-17 12:23:48.885 [WARNING][5554] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7c6f7aebe8e47fdebf5e4226c93fd1dba6a8861d79bda43cde364e68ce4b5570" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5d99b78498--wqsg9-eth0", GenerateName:"calico-apiserver-5d99b78498-", Namespace:"calico-apiserver", SelfLink:"", UID:"85f28c23-7266-45b9-ab87-b4179415ea7d", ResourceVersion:"918", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 23, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5d99b78498", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d89ddd62de8a1ac7a0c6fdec99c7f5da262ce1eb2d7984b353171448d54e6e8d", Pod:"calico-apiserver-5d99b78498-wqsg9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1cbaecf3efa", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:23:48.919008 containerd[1548]: 2025-01-17 12:23:48.885 [INFO][5554] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="7c6f7aebe8e47fdebf5e4226c93fd1dba6a8861d79bda43cde364e68ce4b5570" Jan 17 12:23:48.919008 containerd[1548]: 2025-01-17 12:23:48.885 [INFO][5554] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7c6f7aebe8e47fdebf5e4226c93fd1dba6a8861d79bda43cde364e68ce4b5570" iface="eth0" netns="" Jan 17 12:23:48.919008 containerd[1548]: 2025-01-17 12:23:48.885 [INFO][5554] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="7c6f7aebe8e47fdebf5e4226c93fd1dba6a8861d79bda43cde364e68ce4b5570" Jan 17 12:23:48.919008 containerd[1548]: 2025-01-17 12:23:48.885 [INFO][5554] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7c6f7aebe8e47fdebf5e4226c93fd1dba6a8861d79bda43cde364e68ce4b5570" Jan 17 12:23:48.919008 containerd[1548]: 2025-01-17 12:23:48.906 [INFO][5561] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7c6f7aebe8e47fdebf5e4226c93fd1dba6a8861d79bda43cde364e68ce4b5570" HandleID="k8s-pod-network.7c6f7aebe8e47fdebf5e4226c93fd1dba6a8861d79bda43cde364e68ce4b5570" Workload="localhost-k8s-calico--apiserver--5d99b78498--wqsg9-eth0" Jan 17 12:23:48.919008 containerd[1548]: 2025-01-17 12:23:48.906 [INFO][5561] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:23:48.919008 containerd[1548]: 2025-01-17 12:23:48.906 [INFO][5561] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:23:48.919008 containerd[1548]: 2025-01-17 12:23:48.914 [WARNING][5561] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7c6f7aebe8e47fdebf5e4226c93fd1dba6a8861d79bda43cde364e68ce4b5570" HandleID="k8s-pod-network.7c6f7aebe8e47fdebf5e4226c93fd1dba6a8861d79bda43cde364e68ce4b5570" Workload="localhost-k8s-calico--apiserver--5d99b78498--wqsg9-eth0" Jan 17 12:23:48.919008 containerd[1548]: 2025-01-17 12:23:48.914 [INFO][5561] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7c6f7aebe8e47fdebf5e4226c93fd1dba6a8861d79bda43cde364e68ce4b5570" HandleID="k8s-pod-network.7c6f7aebe8e47fdebf5e4226c93fd1dba6a8861d79bda43cde364e68ce4b5570" Workload="localhost-k8s-calico--apiserver--5d99b78498--wqsg9-eth0" Jan 17 12:23:48.919008 containerd[1548]: 2025-01-17 12:23:48.915 [INFO][5561] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:23:48.919008 containerd[1548]: 2025-01-17 12:23:48.917 [INFO][5554] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="7c6f7aebe8e47fdebf5e4226c93fd1dba6a8861d79bda43cde364e68ce4b5570" Jan 17 12:23:48.919463 containerd[1548]: time="2025-01-17T12:23:48.919044027Z" level=info msg="TearDown network for sandbox \"7c6f7aebe8e47fdebf5e4226c93fd1dba6a8861d79bda43cde364e68ce4b5570\" successfully" Jan 17 12:23:48.924245 containerd[1548]: time="2025-01-17T12:23:48.924201859Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7c6f7aebe8e47fdebf5e4226c93fd1dba6a8861d79bda43cde364e68ce4b5570\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 12:23:48.924339 containerd[1548]: time="2025-01-17T12:23:48.924267138Z" level=info msg="RemovePodSandbox \"7c6f7aebe8e47fdebf5e4226c93fd1dba6a8861d79bda43cde364e68ce4b5570\" returns successfully" Jan 17 12:23:48.924919 containerd[1548]: time="2025-01-17T12:23:48.924670498Z" level=info msg="StopPodSandbox for \"44a987f7b6aab41fa1c4bfe7b24bc44295e6e35bb8ea5cc5d144441a34448bc1\"" Jan 17 12:23:48.998981 containerd[1548]: 2025-01-17 12:23:48.966 [WARNING][5583] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="44a987f7b6aab41fa1c4bfe7b24bc44295e6e35bb8ea5cc5d144441a34448bc1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--9bdf448d6--lzvz2-eth0", GenerateName:"calico-kube-controllers-9bdf448d6-", Namespace:"calico-system", SelfLink:"", UID:"e256e234-13b1-4fd7-b50f-50db194b2888", ResourceVersion:"1035", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 23, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"9bdf448d6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6b53e41618101489948fcd4c924f83c241d0b7ddeb5bc62be2ef08ae4e9816de", Pod:"calico-kube-controllers-9bdf448d6-lzvz2", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia0ebdaf1f4b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:23:48.998981 containerd[1548]: 2025-01-17 12:23:48.966 [INFO][5583] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="44a987f7b6aab41fa1c4bfe7b24bc44295e6e35bb8ea5cc5d144441a34448bc1" Jan 17 12:23:48.998981 containerd[1548]: 2025-01-17 12:23:48.966 [INFO][5583] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="44a987f7b6aab41fa1c4bfe7b24bc44295e6e35bb8ea5cc5d144441a34448bc1" iface="eth0" netns="" Jan 17 12:23:48.998981 containerd[1548]: 2025-01-17 12:23:48.966 [INFO][5583] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="44a987f7b6aab41fa1c4bfe7b24bc44295e6e35bb8ea5cc5d144441a34448bc1" Jan 17 12:23:48.998981 containerd[1548]: 2025-01-17 12:23:48.966 [INFO][5583] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="44a987f7b6aab41fa1c4bfe7b24bc44295e6e35bb8ea5cc5d144441a34448bc1" Jan 17 12:23:48.998981 containerd[1548]: 2025-01-17 12:23:48.986 [INFO][5593] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="44a987f7b6aab41fa1c4bfe7b24bc44295e6e35bb8ea5cc5d144441a34448bc1" HandleID="k8s-pod-network.44a987f7b6aab41fa1c4bfe7b24bc44295e6e35bb8ea5cc5d144441a34448bc1" Workload="localhost-k8s-calico--kube--controllers--9bdf448d6--lzvz2-eth0" Jan 17 12:23:48.998981 containerd[1548]: 2025-01-17 12:23:48.986 [INFO][5593] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:23:48.998981 containerd[1548]: 2025-01-17 12:23:48.986 [INFO][5593] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:23:48.998981 containerd[1548]: 2025-01-17 12:23:48.993 [WARNING][5593] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="44a987f7b6aab41fa1c4bfe7b24bc44295e6e35bb8ea5cc5d144441a34448bc1" HandleID="k8s-pod-network.44a987f7b6aab41fa1c4bfe7b24bc44295e6e35bb8ea5cc5d144441a34448bc1" Workload="localhost-k8s-calico--kube--controllers--9bdf448d6--lzvz2-eth0" Jan 17 12:23:48.998981 containerd[1548]: 2025-01-17 12:23:48.993 [INFO][5593] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="44a987f7b6aab41fa1c4bfe7b24bc44295e6e35bb8ea5cc5d144441a34448bc1" HandleID="k8s-pod-network.44a987f7b6aab41fa1c4bfe7b24bc44295e6e35bb8ea5cc5d144441a34448bc1" Workload="localhost-k8s-calico--kube--controllers--9bdf448d6--lzvz2-eth0" Jan 17 12:23:48.998981 containerd[1548]: 2025-01-17 12:23:48.995 [INFO][5593] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:23:48.998981 containerd[1548]: 2025-01-17 12:23:48.997 [INFO][5583] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="44a987f7b6aab41fa1c4bfe7b24bc44295e6e35bb8ea5cc5d144441a34448bc1" Jan 17 12:23:48.999961 containerd[1548]: time="2025-01-17T12:23:48.999018022Z" level=info msg="TearDown network for sandbox \"44a987f7b6aab41fa1c4bfe7b24bc44295e6e35bb8ea5cc5d144441a34448bc1\" successfully" Jan 17 12:23:48.999961 containerd[1548]: time="2025-01-17T12:23:48.999052902Z" level=info msg="StopPodSandbox for \"44a987f7b6aab41fa1c4bfe7b24bc44295e6e35bb8ea5cc5d144441a34448bc1\" returns successfully" Jan 17 12:23:48.999961 containerd[1548]: time="2025-01-17T12:23:48.999526982Z" level=info msg="RemovePodSandbox for \"44a987f7b6aab41fa1c4bfe7b24bc44295e6e35bb8ea5cc5d144441a34448bc1\"" Jan 17 12:23:48.999961 containerd[1548]: time="2025-01-17T12:23:48.999559782Z" level=info msg="Forcibly stopping sandbox \"44a987f7b6aab41fa1c4bfe7b24bc44295e6e35bb8ea5cc5d144441a34448bc1\"" Jan 17 12:23:49.069700 containerd[1548]: 2025-01-17 12:23:49.037 [WARNING][5615] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="44a987f7b6aab41fa1c4bfe7b24bc44295e6e35bb8ea5cc5d144441a34448bc1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--9bdf448d6--lzvz2-eth0", GenerateName:"calico-kube-controllers-9bdf448d6-", Namespace:"calico-system", SelfLink:"", UID:"e256e234-13b1-4fd7-b50f-50db194b2888", ResourceVersion:"1035", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 23, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"9bdf448d6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6b53e41618101489948fcd4c924f83c241d0b7ddeb5bc62be2ef08ae4e9816de", Pod:"calico-kube-controllers-9bdf448d6-lzvz2", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia0ebdaf1f4b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:23:49.069700 containerd[1548]: 2025-01-17 12:23:49.037 [INFO][5615] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="44a987f7b6aab41fa1c4bfe7b24bc44295e6e35bb8ea5cc5d144441a34448bc1" Jan 17 12:23:49.069700 containerd[1548]: 2025-01-17 12:23:49.037 [INFO][5615] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="44a987f7b6aab41fa1c4bfe7b24bc44295e6e35bb8ea5cc5d144441a34448bc1" iface="eth0" netns="" Jan 17 12:23:49.069700 containerd[1548]: 2025-01-17 12:23:49.037 [INFO][5615] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="44a987f7b6aab41fa1c4bfe7b24bc44295e6e35bb8ea5cc5d144441a34448bc1" Jan 17 12:23:49.069700 containerd[1548]: 2025-01-17 12:23:49.037 [INFO][5615] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="44a987f7b6aab41fa1c4bfe7b24bc44295e6e35bb8ea5cc5d144441a34448bc1" Jan 17 12:23:49.069700 containerd[1548]: 2025-01-17 12:23:49.054 [INFO][5623] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="44a987f7b6aab41fa1c4bfe7b24bc44295e6e35bb8ea5cc5d144441a34448bc1" HandleID="k8s-pod-network.44a987f7b6aab41fa1c4bfe7b24bc44295e6e35bb8ea5cc5d144441a34448bc1" Workload="localhost-k8s-calico--kube--controllers--9bdf448d6--lzvz2-eth0" Jan 17 12:23:49.069700 containerd[1548]: 2025-01-17 12:23:49.054 [INFO][5623] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:23:49.069700 containerd[1548]: 2025-01-17 12:23:49.054 [INFO][5623] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:23:49.069700 containerd[1548]: 2025-01-17 12:23:49.065 [WARNING][5623] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="44a987f7b6aab41fa1c4bfe7b24bc44295e6e35bb8ea5cc5d144441a34448bc1" HandleID="k8s-pod-network.44a987f7b6aab41fa1c4bfe7b24bc44295e6e35bb8ea5cc5d144441a34448bc1" Workload="localhost-k8s-calico--kube--controllers--9bdf448d6--lzvz2-eth0" Jan 17 12:23:49.069700 containerd[1548]: 2025-01-17 12:23:49.065 [INFO][5623] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="44a987f7b6aab41fa1c4bfe7b24bc44295e6e35bb8ea5cc5d144441a34448bc1" HandleID="k8s-pod-network.44a987f7b6aab41fa1c4bfe7b24bc44295e6e35bb8ea5cc5d144441a34448bc1" Workload="localhost-k8s-calico--kube--controllers--9bdf448d6--lzvz2-eth0" Jan 17 12:23:49.069700 containerd[1548]: 2025-01-17 12:23:49.066 [INFO][5623] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:23:49.069700 containerd[1548]: 2025-01-17 12:23:49.067 [INFO][5615] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="44a987f7b6aab41fa1c4bfe7b24bc44295e6e35bb8ea5cc5d144441a34448bc1" Jan 17 12:23:49.070103 containerd[1548]: time="2025-01-17T12:23:49.069748115Z" level=info msg="TearDown network for sandbox \"44a987f7b6aab41fa1c4bfe7b24bc44295e6e35bb8ea5cc5d144441a34448bc1\" successfully" Jan 17 12:23:49.072544 containerd[1548]: time="2025-01-17T12:23:49.072497551Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"44a987f7b6aab41fa1c4bfe7b24bc44295e6e35bb8ea5cc5d144441a34448bc1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 12:23:49.072624 containerd[1548]: time="2025-01-17T12:23:49.072568911Z" level=info msg="RemovePodSandbox \"44a987f7b6aab41fa1c4bfe7b24bc44295e6e35bb8ea5cc5d144441a34448bc1\" returns successfully" Jan 17 12:23:49.074382 containerd[1548]: time="2025-01-17T12:23:49.072999830Z" level=info msg="StopPodSandbox for \"0040d7daf9b507315ecaeceaa5b2b32a80b5a5313f067b67e5272fbdc59b3a93\"" Jan 17 12:23:49.141916 containerd[1548]: 2025-01-17 12:23:49.109 [WARNING][5646] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0040d7daf9b507315ecaeceaa5b2b32a80b5a5313f067b67e5272fbdc59b3a93" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5d99b78498--h976w-eth0", GenerateName:"calico-apiserver-5d99b78498-", Namespace:"calico-apiserver", SelfLink:"", UID:"ff552645-5250-458a-ac60-d14e1b9bee85", ResourceVersion:"931", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 23, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5d99b78498", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5df787a6871dc0ce11233dbb2630bdb25f743f4ec821f2e55029c878668837ac", Pod:"calico-apiserver-5d99b78498-h976w", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali332eb49c52e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:23:49.141916 containerd[1548]: 2025-01-17 12:23:49.109 [INFO][5646] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="0040d7daf9b507315ecaeceaa5b2b32a80b5a5313f067b67e5272fbdc59b3a93" Jan 17 12:23:49.141916 containerd[1548]: 2025-01-17 12:23:49.109 [INFO][5646] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0040d7daf9b507315ecaeceaa5b2b32a80b5a5313f067b67e5272fbdc59b3a93" iface="eth0" netns="" Jan 17 12:23:49.141916 containerd[1548]: 2025-01-17 12:23:49.109 [INFO][5646] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="0040d7daf9b507315ecaeceaa5b2b32a80b5a5313f067b67e5272fbdc59b3a93" Jan 17 12:23:49.141916 containerd[1548]: 2025-01-17 12:23:49.109 [INFO][5646] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0040d7daf9b507315ecaeceaa5b2b32a80b5a5313f067b67e5272fbdc59b3a93" Jan 17 12:23:49.141916 containerd[1548]: 2025-01-17 12:23:49.127 [INFO][5653] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0040d7daf9b507315ecaeceaa5b2b32a80b5a5313f067b67e5272fbdc59b3a93" HandleID="k8s-pod-network.0040d7daf9b507315ecaeceaa5b2b32a80b5a5313f067b67e5272fbdc59b3a93" Workload="localhost-k8s-calico--apiserver--5d99b78498--h976w-eth0" Jan 17 12:23:49.141916 containerd[1548]: 2025-01-17 12:23:49.127 [INFO][5653] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:23:49.141916 containerd[1548]: 2025-01-17 12:23:49.127 [INFO][5653] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:23:49.141916 containerd[1548]: 2025-01-17 12:23:49.137 [WARNING][5653] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0040d7daf9b507315ecaeceaa5b2b32a80b5a5313f067b67e5272fbdc59b3a93" HandleID="k8s-pod-network.0040d7daf9b507315ecaeceaa5b2b32a80b5a5313f067b67e5272fbdc59b3a93" Workload="localhost-k8s-calico--apiserver--5d99b78498--h976w-eth0" Jan 17 12:23:49.141916 containerd[1548]: 2025-01-17 12:23:49.137 [INFO][5653] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0040d7daf9b507315ecaeceaa5b2b32a80b5a5313f067b67e5272fbdc59b3a93" HandleID="k8s-pod-network.0040d7daf9b507315ecaeceaa5b2b32a80b5a5313f067b67e5272fbdc59b3a93" Workload="localhost-k8s-calico--apiserver--5d99b78498--h976w-eth0" Jan 17 12:23:49.141916 containerd[1548]: 2025-01-17 12:23:49.138 [INFO][5653] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:23:49.141916 containerd[1548]: 2025-01-17 12:23:49.140 [INFO][5646] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="0040d7daf9b507315ecaeceaa5b2b32a80b5a5313f067b67e5272fbdc59b3a93" Jan 17 12:23:49.142304 containerd[1548]: time="2025-01-17T12:23:49.141946846Z" level=info msg="TearDown network for sandbox \"0040d7daf9b507315ecaeceaa5b2b32a80b5a5313f067b67e5272fbdc59b3a93\" successfully" Jan 17 12:23:49.142304 containerd[1548]: time="2025-01-17T12:23:49.141973206Z" level=info msg="StopPodSandbox for \"0040d7daf9b507315ecaeceaa5b2b32a80b5a5313f067b67e5272fbdc59b3a93\" returns successfully" Jan 17 12:23:49.142733 containerd[1548]: time="2025-01-17T12:23:49.142464365Z" level=info msg="RemovePodSandbox for \"0040d7daf9b507315ecaeceaa5b2b32a80b5a5313f067b67e5272fbdc59b3a93\"" Jan 17 12:23:49.142733 containerd[1548]: time="2025-01-17T12:23:49.142503245Z" level=info msg="Forcibly stopping sandbox \"0040d7daf9b507315ecaeceaa5b2b32a80b5a5313f067b67e5272fbdc59b3a93\"" Jan 17 12:23:49.210330 containerd[1548]: 2025-01-17 12:23:49.177 [WARNING][5676] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0040d7daf9b507315ecaeceaa5b2b32a80b5a5313f067b67e5272fbdc59b3a93" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5d99b78498--h976w-eth0", GenerateName:"calico-apiserver-5d99b78498-", Namespace:"calico-apiserver", SelfLink:"", UID:"ff552645-5250-458a-ac60-d14e1b9bee85", ResourceVersion:"931", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 23, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5d99b78498", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5df787a6871dc0ce11233dbb2630bdb25f743f4ec821f2e55029c878668837ac", Pod:"calico-apiserver-5d99b78498-h976w", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali332eb49c52e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:23:49.210330 containerd[1548]: 2025-01-17 12:23:49.177 [INFO][5676] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="0040d7daf9b507315ecaeceaa5b2b32a80b5a5313f067b67e5272fbdc59b3a93" Jan 17 12:23:49.210330 containerd[1548]: 2025-01-17 12:23:49.177 [INFO][5676] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0040d7daf9b507315ecaeceaa5b2b32a80b5a5313f067b67e5272fbdc59b3a93" iface="eth0" netns="" Jan 17 12:23:49.210330 containerd[1548]: 2025-01-17 12:23:49.177 [INFO][5676] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="0040d7daf9b507315ecaeceaa5b2b32a80b5a5313f067b67e5272fbdc59b3a93" Jan 17 12:23:49.210330 containerd[1548]: 2025-01-17 12:23:49.177 [INFO][5676] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0040d7daf9b507315ecaeceaa5b2b32a80b5a5313f067b67e5272fbdc59b3a93" Jan 17 12:23:49.210330 containerd[1548]: 2025-01-17 12:23:49.196 [INFO][5684] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0040d7daf9b507315ecaeceaa5b2b32a80b5a5313f067b67e5272fbdc59b3a93" HandleID="k8s-pod-network.0040d7daf9b507315ecaeceaa5b2b32a80b5a5313f067b67e5272fbdc59b3a93" Workload="localhost-k8s-calico--apiserver--5d99b78498--h976w-eth0" Jan 17 12:23:49.210330 containerd[1548]: 2025-01-17 12:23:49.196 [INFO][5684] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:23:49.210330 containerd[1548]: 2025-01-17 12:23:49.196 [INFO][5684] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:23:49.210330 containerd[1548]: 2025-01-17 12:23:49.204 [WARNING][5684] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0040d7daf9b507315ecaeceaa5b2b32a80b5a5313f067b67e5272fbdc59b3a93" HandleID="k8s-pod-network.0040d7daf9b507315ecaeceaa5b2b32a80b5a5313f067b67e5272fbdc59b3a93" Workload="localhost-k8s-calico--apiserver--5d99b78498--h976w-eth0" Jan 17 12:23:49.210330 containerd[1548]: 2025-01-17 12:23:49.204 [INFO][5684] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0040d7daf9b507315ecaeceaa5b2b32a80b5a5313f067b67e5272fbdc59b3a93" HandleID="k8s-pod-network.0040d7daf9b507315ecaeceaa5b2b32a80b5a5313f067b67e5272fbdc59b3a93" Workload="localhost-k8s-calico--apiserver--5d99b78498--h976w-eth0" Jan 17 12:23:49.210330 containerd[1548]: 2025-01-17 12:23:49.205 [INFO][5684] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:23:49.210330 containerd[1548]: 2025-01-17 12:23:49.207 [INFO][5676] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="0040d7daf9b507315ecaeceaa5b2b32a80b5a5313f067b67e5272fbdc59b3a93" Jan 17 12:23:49.210330 containerd[1548]: time="2025-01-17T12:23:49.209162344Z" level=info msg="TearDown network for sandbox \"0040d7daf9b507315ecaeceaa5b2b32a80b5a5313f067b67e5272fbdc59b3a93\" successfully" Jan 17 12:23:49.218020 containerd[1548]: time="2025-01-17T12:23:49.217988051Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0040d7daf9b507315ecaeceaa5b2b32a80b5a5313f067b67e5272fbdc59b3a93\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 12:23:49.218167 containerd[1548]: time="2025-01-17T12:23:49.218149651Z" level=info msg="RemovePodSandbox \"0040d7daf9b507315ecaeceaa5b2b32a80b5a5313f067b67e5272fbdc59b3a93\" returns successfully" Jan 17 12:23:49.218710 containerd[1548]: time="2025-01-17T12:23:49.218683250Z" level=info msg="StopPodSandbox for \"0c49679fb98a69a6cbebd3c05241103a03b39e1bca53c0a43dd8524eb750c10e\"" Jan 17 12:23:49.283878 containerd[1548]: 2025-01-17 12:23:49.252 [WARNING][5707] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0c49679fb98a69a6cbebd3c05241103a03b39e1bca53c0a43dd8524eb750c10e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--d8gl2-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"0f7f2ce6-11ce-4b25-b85f-2f1455b73126", ResourceVersion:"987", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 23, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"50303a39c30d5d187d67277aac95c232ccb3a932295916b82f4e5fa6834f4727", Pod:"csi-node-driver-d8gl2", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali0d478f83769", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:23:49.283878 containerd[1548]: 2025-01-17 12:23:49.252 [INFO][5707] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="0c49679fb98a69a6cbebd3c05241103a03b39e1bca53c0a43dd8524eb750c10e" Jan 17 12:23:49.283878 containerd[1548]: 2025-01-17 12:23:49.252 [INFO][5707] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0c49679fb98a69a6cbebd3c05241103a03b39e1bca53c0a43dd8524eb750c10e" iface="eth0" netns="" Jan 17 12:23:49.283878 containerd[1548]: 2025-01-17 12:23:49.252 [INFO][5707] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="0c49679fb98a69a6cbebd3c05241103a03b39e1bca53c0a43dd8524eb750c10e" Jan 17 12:23:49.283878 containerd[1548]: 2025-01-17 12:23:49.252 [INFO][5707] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0c49679fb98a69a6cbebd3c05241103a03b39e1bca53c0a43dd8524eb750c10e" Jan 17 12:23:49.283878 containerd[1548]: 2025-01-17 12:23:49.270 [INFO][5714] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0c49679fb98a69a6cbebd3c05241103a03b39e1bca53c0a43dd8524eb750c10e" HandleID="k8s-pod-network.0c49679fb98a69a6cbebd3c05241103a03b39e1bca53c0a43dd8524eb750c10e" Workload="localhost-k8s-csi--node--driver--d8gl2-eth0" Jan 17 12:23:49.283878 containerd[1548]: 2025-01-17 12:23:49.270 [INFO][5714] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:23:49.283878 containerd[1548]: 2025-01-17 12:23:49.270 [INFO][5714] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:23:49.283878 containerd[1548]: 2025-01-17 12:23:49.279 [WARNING][5714] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0c49679fb98a69a6cbebd3c05241103a03b39e1bca53c0a43dd8524eb750c10e" HandleID="k8s-pod-network.0c49679fb98a69a6cbebd3c05241103a03b39e1bca53c0a43dd8524eb750c10e" Workload="localhost-k8s-csi--node--driver--d8gl2-eth0" Jan 17 12:23:49.283878 containerd[1548]: 2025-01-17 12:23:49.279 [INFO][5714] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0c49679fb98a69a6cbebd3c05241103a03b39e1bca53c0a43dd8524eb750c10e" HandleID="k8s-pod-network.0c49679fb98a69a6cbebd3c05241103a03b39e1bca53c0a43dd8524eb750c10e" Workload="localhost-k8s-csi--node--driver--d8gl2-eth0" Jan 17 12:23:49.283878 containerd[1548]: 2025-01-17 12:23:49.280 [INFO][5714] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:23:49.283878 containerd[1548]: 2025-01-17 12:23:49.282 [INFO][5707] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="0c49679fb98a69a6cbebd3c05241103a03b39e1bca53c0a43dd8524eb750c10e" Jan 17 12:23:49.283878 containerd[1548]: time="2025-01-17T12:23:49.283742791Z" level=info msg="TearDown network for sandbox \"0c49679fb98a69a6cbebd3c05241103a03b39e1bca53c0a43dd8524eb750c10e\" successfully" Jan 17 12:23:49.283878 containerd[1548]: time="2025-01-17T12:23:49.283766151Z" level=info msg="StopPodSandbox for \"0c49679fb98a69a6cbebd3c05241103a03b39e1bca53c0a43dd8524eb750c10e\" returns successfully" Jan 17 12:23:49.284297 containerd[1548]: time="2025-01-17T12:23:49.284202191Z" level=info msg="RemovePodSandbox for \"0c49679fb98a69a6cbebd3c05241103a03b39e1bca53c0a43dd8524eb750c10e\"" Jan 17 12:23:49.284297 containerd[1548]: time="2025-01-17T12:23:49.284230431Z" level=info msg="Forcibly stopping sandbox \"0c49679fb98a69a6cbebd3c05241103a03b39e1bca53c0a43dd8524eb750c10e\"" Jan 17 12:23:49.353511 containerd[1548]: 2025-01-17 12:23:49.317 [WARNING][5737] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0c49679fb98a69a6cbebd3c05241103a03b39e1bca53c0a43dd8524eb750c10e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--d8gl2-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"0f7f2ce6-11ce-4b25-b85f-2f1455b73126", ResourceVersion:"987", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 23, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"50303a39c30d5d187d67277aac95c232ccb3a932295916b82f4e5fa6834f4727", Pod:"csi-node-driver-d8gl2", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali0d478f83769", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:23:49.353511 containerd[1548]: 2025-01-17 12:23:49.317 [INFO][5737] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="0c49679fb98a69a6cbebd3c05241103a03b39e1bca53c0a43dd8524eb750c10e" Jan 17 12:23:49.353511 containerd[1548]: 2025-01-17 12:23:49.317 [INFO][5737] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0c49679fb98a69a6cbebd3c05241103a03b39e1bca53c0a43dd8524eb750c10e" iface="eth0" netns="" Jan 17 12:23:49.353511 containerd[1548]: 2025-01-17 12:23:49.317 [INFO][5737] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="0c49679fb98a69a6cbebd3c05241103a03b39e1bca53c0a43dd8524eb750c10e" Jan 17 12:23:49.353511 containerd[1548]: 2025-01-17 12:23:49.317 [INFO][5737] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0c49679fb98a69a6cbebd3c05241103a03b39e1bca53c0a43dd8524eb750c10e" Jan 17 12:23:49.353511 containerd[1548]: 2025-01-17 12:23:49.340 [INFO][5744] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0c49679fb98a69a6cbebd3c05241103a03b39e1bca53c0a43dd8524eb750c10e" HandleID="k8s-pod-network.0c49679fb98a69a6cbebd3c05241103a03b39e1bca53c0a43dd8524eb750c10e" Workload="localhost-k8s-csi--node--driver--d8gl2-eth0" Jan 17 12:23:49.353511 containerd[1548]: 2025-01-17 12:23:49.341 [INFO][5744] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:23:49.353511 containerd[1548]: 2025-01-17 12:23:49.341 [INFO][5744] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:23:49.353511 containerd[1548]: 2025-01-17 12:23:49.348 [WARNING][5744] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0c49679fb98a69a6cbebd3c05241103a03b39e1bca53c0a43dd8524eb750c10e" HandleID="k8s-pod-network.0c49679fb98a69a6cbebd3c05241103a03b39e1bca53c0a43dd8524eb750c10e" Workload="localhost-k8s-csi--node--driver--d8gl2-eth0" Jan 17 12:23:49.353511 containerd[1548]: 2025-01-17 12:23:49.348 [INFO][5744] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0c49679fb98a69a6cbebd3c05241103a03b39e1bca53c0a43dd8524eb750c10e" HandleID="k8s-pod-network.0c49679fb98a69a6cbebd3c05241103a03b39e1bca53c0a43dd8524eb750c10e" Workload="localhost-k8s-csi--node--driver--d8gl2-eth0" Jan 17 12:23:49.353511 containerd[1548]: 2025-01-17 12:23:49.350 [INFO][5744] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:23:49.353511 containerd[1548]: 2025-01-17 12:23:49.351 [INFO][5737] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="0c49679fb98a69a6cbebd3c05241103a03b39e1bca53c0a43dd8524eb750c10e" Jan 17 12:23:49.353511 containerd[1548]: time="2025-01-17T12:23:49.353477926Z" level=info msg="TearDown network for sandbox \"0c49679fb98a69a6cbebd3c05241103a03b39e1bca53c0a43dd8524eb750c10e\" successfully" Jan 17 12:23:49.360449 containerd[1548]: time="2025-01-17T12:23:49.360398315Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0c49679fb98a69a6cbebd3c05241103a03b39e1bca53c0a43dd8524eb750c10e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 12:23:49.360541 containerd[1548]: time="2025-01-17T12:23:49.360455635Z" level=info msg="RemovePodSandbox \"0c49679fb98a69a6cbebd3c05241103a03b39e1bca53c0a43dd8524eb750c10e\" returns successfully" Jan 17 12:23:50.085607 systemd[1]: Started sshd@17-10.0.0.132:22-10.0.0.1:51504.service - OpenSSH per-connection server daemon (10.0.0.1:51504). Jan 17 12:23:50.121758 sshd[5752]: Accepted publickey for core from 10.0.0.1 port 51504 ssh2: RSA SHA256:xsFjL0Ru499iNfhRyIcjP6wTIWZ5oE8f5Pm6hYv+KHo Jan 17 12:23:50.122998 sshd[5752]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:23:50.126874 systemd-logind[1520]: New session 18 of user core. Jan 17 12:23:50.139759 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 17 12:23:50.262206 sshd[5752]: pam_unix(sshd:session): session closed for user core Jan 17 12:23:50.264779 systemd[1]: sshd@17-10.0.0.132:22-10.0.0.1:51504.service: Deactivated successfully. Jan 17 12:23:50.267305 systemd-logind[1520]: Session 18 logged out. Waiting for processes to exit. Jan 17 12:23:50.267482 systemd[1]: session-18.scope: Deactivated successfully. Jan 17 12:23:50.269579 systemd-logind[1520]: Removed session 18. Jan 17 12:23:55.276636 systemd[1]: Started sshd@18-10.0.0.132:22-10.0.0.1:35092.service - OpenSSH per-connection server daemon (10.0.0.1:35092). Jan 17 12:23:55.305480 sshd[5769]: Accepted publickey for core from 10.0.0.1 port 35092 ssh2: RSA SHA256:xsFjL0Ru499iNfhRyIcjP6wTIWZ5oE8f5Pm6hYv+KHo Jan 17 12:23:55.306631 sshd[5769]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:23:55.312986 systemd-logind[1520]: New session 19 of user core. Jan 17 12:23:55.323715 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 17 12:23:55.443764 sshd[5769]: pam_unix(sshd:session): session closed for user core Jan 17 12:23:55.447174 systemd[1]: sshd@18-10.0.0.132:22-10.0.0.1:35092.service: Deactivated successfully. Jan 17 12:23:55.448986 systemd-logind[1520]: Session 19 logged out. Waiting for processes to exit. 
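The teardown entries above follow a fixed sequence: acquire the host-wide IPAM lock, try to release the allocation by its handle ID, fall back to the workload ID when the handle is already gone, release the lock, and treat a missing allocation as success so the repeated CNI DEL (here triggered by the forcible RemovePodSandbox) stays idempotent. The Go sketch below only illustrates that pattern; it is not Calico's actual implementation, and every identifier in it (ipamStore, Release, the maps) is hypothetical.

package main

import (
	"fmt"
	"sync"
)

// ipamStore is a hypothetical stand-in for an IPAM backend; it exists only to
// show the lock/fallback sequence visible in the log, not Calico's real code.
type ipamStore struct {
	mu         sync.Mutex        // stands in for the "host-wide IPAM lock"
	byHandle   map[string]string // handle ID -> allocated IP
	byWorkload map[string]string // workload  -> allocated IP
}

// Release mirrors the logged order of operations for a CNI DEL: acquire the
// lock, release by handle ID, fall back to the workload ID, and treat
// "not found" as success so repeated DELs stay idempotent.
func (s *ipamStore) Release(handleID, workloadID string) {
	s.mu.Lock()         // "Acquired host-wide IPAM lock."
	defer s.mu.Unlock() // "Released host-wide IPAM lock."

	if ip, ok := s.byHandle[handleID]; ok {
		delete(s.byHandle, handleID)
		fmt.Printf("released %s by handle %s\n", ip, handleID)
		return
	}
	// "Asked to release address but it doesn't exist. Ignoring" - fall back
	// to the workload ID instead of failing the teardown.
	if ip, ok := s.byWorkload[workloadID]; ok {
		delete(s.byWorkload, workloadID)
		fmt.Printf("released %s by workload %s\n", ip, workloadID)
		return
	}
	fmt.Println("nothing to release; DEL treated as a no-op")
}

func main() {
	s := &ipamStore{byHandle: map[string]string{}, byWorkload: map[string]string{}}
	// Handle and workload IDs shortened from the log for readability.
	s.Release("k8s-pod-network.0c49679fb98a", "localhost-k8s-csi--node--driver--d8gl2-eth0")
}

Because a missing allocation is not an error, the second StopPodSandbox/RemovePodSandbox pass in the log can complete even though the first pass already released the address.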
Jan 17 12:23:55.449083 systemd[1]: session-19.scope: Deactivated successfully.
Jan 17 12:23:55.451231 systemd-logind[1520]: Removed session 19.
Jan 17 12:23:57.620613 kubelet[2717]: E0117 12:23:57.620562 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 17 12:24:00.454685 systemd[1]: Started sshd@19-10.0.0.132:22-10.0.0.1:35098.service - OpenSSH per-connection server daemon (10.0.0.1:35098).
Jan 17 12:24:00.490432 sshd[5828]: Accepted publickey for core from 10.0.0.1 port 35098 ssh2: RSA SHA256:xsFjL0Ru499iNfhRyIcjP6wTIWZ5oE8f5Pm6hYv+KHo
Jan 17 12:24:00.491654 sshd[5828]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 12:24:00.495423 systemd-logind[1520]: New session 20 of user core.
Jan 17 12:24:00.504755 systemd[1]: Started session-20.scope - Session 20 of User core.
Jan 17 12:24:00.687159 sshd[5828]: pam_unix(sshd:session): session closed for user core
Jan 17 12:24:00.690586 systemd[1]: sshd@19-10.0.0.132:22-10.0.0.1:35098.service: Deactivated successfully.
Jan 17 12:24:00.693154 systemd[1]: session-20.scope: Deactivated successfully.
Jan 17 12:24:00.694347 systemd-logind[1520]: Session 20 logged out. Waiting for processes to exit.
Jan 17 12:24:00.695957 systemd-logind[1520]: Removed session 20.
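The kubelet error at 12:23:57 means the node's resolv.conf listed more nameservers than kubelet will pass through to pod DNS configuration; it keeps only the first three (here 1.1.1.1, 1.0.0.1 and 8.8.8.8, matching the classic three-resolver limit of the glibc resolver) and reports the rest as omitted. The Go sketch below shows that truncation behaviour in isolation; it is not kubelet's actual parsing code, and the fourth nameserver (9.9.9.9) is an invented example, since the log only shows which three were applied.

package main

import (
	"bufio"
	"fmt"
	"strings"
)

// maxNameservers reflects the three-nameserver limit behind the
// "Nameserver limits exceeded" message.
const maxNameservers = 3

// applyNameserverLimit picks nameserver entries out of a resolv.conf body and
// keeps only the first three, printing a warning when some are dropped.
func applyNameserverLimit(resolvConf string) []string {
	var kept []string
	sc := bufio.NewScanner(strings.NewReader(resolvConf))
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			kept = append(kept, fields[1])
		}
	}
	if len(kept) > maxNameservers {
		fmt.Println("Nameserver limits were exceeded, some nameservers have been omitted")
		kept = kept[:maxNameservers]
	}
	return kept
}

func main() {
	// 9.9.9.9 is a hypothetical extra entry; the real dropped entries are not in the log.
	conf := "nameserver 1.1.1.1\nnameserver 1.0.0.1\nnameserver 8.8.8.8\nnameserver 9.9.9.9\n"
	fmt.Println("applied nameserver line:", strings.Join(applyNameserverLimit(conf), " "))
	// Prints: applied nameserver line: 1.1.1.1 1.0.0.1 8.8.8.8
}

Trimming the host's resolv.conf down to three nameservers (or pointing pods at a local caching resolver) would make this recurring kubelet error go away.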