Jan 17 11:58:58.895650 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Jan 17 11:58:58.895672 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Fri Jan 17 10:42:25 -00 2025 Jan 17 11:58:58.895687 kernel: KASLR enabled Jan 17 11:58:58.895693 kernel: efi: EFI v2.7 by EDK II Jan 17 11:58:58.895699 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdba86018 ACPI 2.0=0xd9710018 RNG=0xd971e498 MEMRESERVE=0xd9b43d18 Jan 17 11:58:58.895704 kernel: random: crng init done Jan 17 11:58:58.895712 kernel: ACPI: Early table checksum verification disabled Jan 17 11:58:58.895717 kernel: ACPI: RSDP 0x00000000D9710018 000024 (v02 BOCHS ) Jan 17 11:58:58.895730 kernel: ACPI: XSDT 0x00000000D971FE98 000064 (v01 BOCHS BXPC 00000001 01000013) Jan 17 11:58:58.895739 kernel: ACPI: FACP 0x00000000D971FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) Jan 17 11:58:58.895745 kernel: ACPI: DSDT 0x00000000D9717518 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001) Jan 17 11:58:58.895751 kernel: ACPI: APIC 0x00000000D971FC18 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001) Jan 17 11:58:58.895757 kernel: ACPI: PPTT 0x00000000D971D898 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001) Jan 17 11:58:58.895763 kernel: ACPI: GTDT 0x00000000D971E818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Jan 17 11:58:58.895770 kernel: ACPI: MCFG 0x00000000D971E918 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 17 11:58:58.895778 kernel: ACPI: SPCR 0x00000000D971FF98 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) Jan 17 11:58:58.895785 kernel: ACPI: DBG2 0x00000000D971E418 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) Jan 17 11:58:58.895791 kernel: ACPI: IORT 0x00000000D971E718 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jan 17 11:58:58.895797 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600 Jan 17 11:58:58.895803 kernel: NUMA: Failed to initialise from firmware Jan 17 11:58:58.895810 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff] Jan 17 11:58:58.895816 kernel: NUMA: NODE_DATA [mem 0xdc958800-0xdc95dfff] Jan 17 11:58:58.895822 kernel: Zone ranges: Jan 17 11:58:58.895828 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff] Jan 17 11:58:58.895835 kernel: DMA32 empty Jan 17 11:58:58.895842 kernel: Normal empty Jan 17 11:58:58.895848 kernel: Movable zone start for each node Jan 17 11:58:58.895855 kernel: Early memory node ranges Jan 17 11:58:58.895861 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff] Jan 17 11:58:58.895867 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff] Jan 17 11:58:58.895873 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff] Jan 17 11:58:58.895880 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff] Jan 17 11:58:58.895886 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff] Jan 17 11:58:58.895893 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff] Jan 17 11:58:58.895899 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff] Jan 17 11:58:58.895905 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff] Jan 17 11:58:58.895911 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges Jan 17 11:58:58.895919 kernel: psci: probing for conduit method from ACPI. Jan 17 11:58:58.895925 kernel: psci: PSCIv1.1 detected in firmware. 
Jan 17 11:58:58.895932 kernel: psci: Using standard PSCI v0.2 function IDs Jan 17 11:58:58.895941 kernel: psci: Trusted OS migration not required Jan 17 11:58:58.895948 kernel: psci: SMC Calling Convention v1.1 Jan 17 11:58:58.895955 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) Jan 17 11:58:58.895963 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976 Jan 17 11:58:58.895970 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096 Jan 17 11:58:58.895977 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 Jan 17 11:58:58.895984 kernel: Detected PIPT I-cache on CPU0 Jan 17 11:58:58.895990 kernel: CPU features: detected: GIC system register CPU interface Jan 17 11:58:58.895997 kernel: CPU features: detected: Hardware dirty bit management Jan 17 11:58:58.896004 kernel: CPU features: detected: Spectre-v4 Jan 17 11:58:58.896010 kernel: CPU features: detected: Spectre-BHB Jan 17 11:58:58.896017 kernel: CPU features: kernel page table isolation forced ON by KASLR Jan 17 11:58:58.896024 kernel: CPU features: detected: Kernel page table isolation (KPTI) Jan 17 11:58:58.896032 kernel: CPU features: detected: ARM erratum 1418040 Jan 17 11:58:58.896038 kernel: CPU features: detected: SSBS not fully self-synchronizing Jan 17 11:58:58.896045 kernel: alternatives: applying boot alternatives Jan 17 11:58:58.896053 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=1dec90e7382e4708d8bb0385f9465c79a53a2c2baf70ef34aed11855f47d17b3 Jan 17 11:58:58.896060 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jan 17 11:58:58.896067 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 17 11:58:58.896073 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 17 11:58:58.896080 kernel: Fallback order for Node 0: 0 Jan 17 11:58:58.896087 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024 Jan 17 11:58:58.896093 kernel: Policy zone: DMA Jan 17 11:58:58.896100 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 17 11:58:58.896108 kernel: software IO TLB: area num 4. Jan 17 11:58:58.896115 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB) Jan 17 11:58:58.896122 kernel: Memory: 2386532K/2572288K available (10240K kernel code, 2184K rwdata, 8096K rodata, 39360K init, 897K bss, 185756K reserved, 0K cma-reserved) Jan 17 11:58:58.896130 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Jan 17 11:58:58.896137 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 17 11:58:58.896144 kernel: rcu: RCU event tracing is enabled. Jan 17 11:58:58.896151 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Jan 17 11:58:58.896157 kernel: Trampoline variant of Tasks RCU enabled. Jan 17 11:58:58.896177 kernel: Tracing variant of Tasks RCU enabled. Jan 17 11:58:58.896191 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Jan 17 11:58:58.896198 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Jan 17 11:58:58.896205 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Jan 17 11:58:58.896214 kernel: GICv3: 256 SPIs implemented Jan 17 11:58:58.896221 kernel: GICv3: 0 Extended SPIs implemented Jan 17 11:58:58.896228 kernel: Root IRQ handler: gic_handle_irq Jan 17 11:58:58.896235 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Jan 17 11:58:58.896241 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 Jan 17 11:58:58.896248 kernel: ITS [mem 0x08080000-0x0809ffff] Jan 17 11:58:58.896255 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1) Jan 17 11:58:58.896262 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1) Jan 17 11:58:58.896269 kernel: GICv3: using LPI property table @0x00000000400f0000 Jan 17 11:58:58.896276 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000 Jan 17 11:58:58.896283 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 17 11:58:58.896292 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jan 17 11:58:58.896298 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Jan 17 11:58:58.896305 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Jan 17 11:58:58.896313 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Jan 17 11:58:58.896319 kernel: arm-pv: using stolen time PV Jan 17 11:58:58.896326 kernel: Console: colour dummy device 80x25 Jan 17 11:58:58.896333 kernel: ACPI: Core revision 20230628 Jan 17 11:58:58.896340 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Jan 17 11:58:58.896347 kernel: pid_max: default: 32768 minimum: 301 Jan 17 11:58:58.896354 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 17 11:58:58.896363 kernel: landlock: Up and running. Jan 17 11:58:58.896370 kernel: SELinux: Initializing. Jan 17 11:58:58.896376 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 17 11:58:58.896383 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 17 11:58:58.896390 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 17 11:58:58.896397 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 17 11:58:58.896405 kernel: rcu: Hierarchical SRCU implementation. Jan 17 11:58:58.896412 kernel: rcu: Max phase no-delay instances is 400. Jan 17 11:58:58.896419 kernel: Platform MSI: ITS@0x8080000 domain created Jan 17 11:58:58.896430 kernel: PCI/MSI: ITS@0x8080000 domain created Jan 17 11:58:58.896439 kernel: Remapping and enabling EFI services. Jan 17 11:58:58.896448 kernel: smp: Bringing up secondary CPUs ... 
Jan 17 11:58:58.896455 kernel: Detected PIPT I-cache on CPU1 Jan 17 11:58:58.896462 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 Jan 17 11:58:58.896470 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000 Jan 17 11:58:58.896477 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jan 17 11:58:58.896484 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Jan 17 11:58:58.896491 kernel: Detected PIPT I-cache on CPU2 Jan 17 11:58:58.896500 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000 Jan 17 11:58:58.896508 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000 Jan 17 11:58:58.896515 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jan 17 11:58:58.896527 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1] Jan 17 11:58:58.896537 kernel: Detected PIPT I-cache on CPU3 Jan 17 11:58:58.896547 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000 Jan 17 11:58:58.896556 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000 Jan 17 11:58:58.896566 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jan 17 11:58:58.896573 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1] Jan 17 11:58:58.896580 kernel: smp: Brought up 1 node, 4 CPUs Jan 17 11:58:58.896589 kernel: SMP: Total of 4 processors activated. Jan 17 11:58:58.896596 kernel: CPU features: detected: 32-bit EL0 Support Jan 17 11:58:58.896603 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Jan 17 11:58:58.896610 kernel: CPU features: detected: Common not Private translations Jan 17 11:58:58.896618 kernel: CPU features: detected: CRC32 instructions Jan 17 11:58:58.896625 kernel: CPU features: detected: Enhanced Virtualization Traps Jan 17 11:58:58.896632 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Jan 17 11:58:58.896639 kernel: CPU features: detected: LSE atomic instructions Jan 17 11:58:58.896647 kernel: CPU features: detected: Privileged Access Never Jan 17 11:58:58.896655 kernel: CPU features: detected: RAS Extension Support Jan 17 11:58:58.896662 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Jan 17 11:58:58.896669 kernel: CPU: All CPU(s) started at EL1 Jan 17 11:58:58.896676 kernel: alternatives: applying system-wide alternatives Jan 17 11:58:58.896683 kernel: devtmpfs: initialized Jan 17 11:58:58.896690 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 17 11:58:58.896698 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Jan 17 11:58:58.896705 kernel: pinctrl core: initialized pinctrl subsystem Jan 17 11:58:58.896713 kernel: SMBIOS 3.0.0 present. 
Jan 17 11:58:58.896721 kernel: DMI: QEMU KVM Virtual Machine, BIOS edk2-20230524-3.fc38 05/24/2023 Jan 17 11:58:58.896732 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 17 11:58:58.896740 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Jan 17 11:58:58.896747 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Jan 17 11:58:58.896755 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Jan 17 11:58:58.896762 kernel: audit: initializing netlink subsys (disabled) Jan 17 11:58:58.896769 kernel: audit: type=2000 audit(0.024:1): state=initialized audit_enabled=0 res=1 Jan 17 11:58:58.896776 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 17 11:58:58.896785 kernel: cpuidle: using governor menu Jan 17 11:58:58.896793 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Jan 17 11:58:58.896800 kernel: ASID allocator initialised with 32768 entries Jan 17 11:58:58.896807 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 17 11:58:58.896814 kernel: Serial: AMBA PL011 UART driver Jan 17 11:58:58.896822 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Jan 17 11:58:58.896829 kernel: Modules: 0 pages in range for non-PLT usage Jan 17 11:58:58.896836 kernel: Modules: 509040 pages in range for PLT usage Jan 17 11:58:58.896843 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 17 11:58:58.896852 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Jan 17 11:58:58.896860 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Jan 17 11:58:58.896867 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Jan 17 11:58:58.896874 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 17 11:58:58.896881 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Jan 17 11:58:58.896888 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Jan 17 11:58:58.896896 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Jan 17 11:58:58.896903 kernel: ACPI: Added _OSI(Module Device) Jan 17 11:58:58.896910 kernel: ACPI: Added _OSI(Processor Device) Jan 17 11:58:58.896918 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jan 17 11:58:58.896926 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 17 11:58:58.896933 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 17 11:58:58.896940 kernel: ACPI: Interpreter enabled Jan 17 11:58:58.896947 kernel: ACPI: Using GIC for interrupt routing Jan 17 11:58:58.896954 kernel: ACPI: MCFG table detected, 1 entries Jan 17 11:58:58.896962 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA Jan 17 11:58:58.896969 kernel: printk: console [ttyAMA0] enabled Jan 17 11:58:58.896976 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 17 11:58:58.897110 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jan 17 11:58:58.897183 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Jan 17 11:58:58.897264 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Jan 17 11:58:58.897329 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 Jan 17 11:58:58.897394 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] Jan 17 11:58:58.897404 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] Jan 17 
11:58:58.897411 kernel: PCI host bridge to bus 0000:00 Jan 17 11:58:58.897486 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] Jan 17 11:58:58.897546 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Jan 17 11:58:58.897603 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] Jan 17 11:58:58.897659 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jan 17 11:58:58.897752 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 Jan 17 11:58:58.897832 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 Jan 17 11:58:58.897905 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f] Jan 17 11:58:58.897971 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff] Jan 17 11:58:58.898035 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref] Jan 17 11:58:58.898100 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref] Jan 17 11:58:58.898164 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff] Jan 17 11:58:58.898241 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f] Jan 17 11:58:58.898301 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Jan 17 11:58:58.898358 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Jan 17 11:58:58.898419 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Jan 17 11:58:58.898428 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Jan 17 11:58:58.898436 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Jan 17 11:58:58.898444 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Jan 17 11:58:58.898451 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Jan 17 11:58:58.898458 kernel: iommu: Default domain type: Translated Jan 17 11:58:58.898465 kernel: iommu: DMA domain TLB invalidation policy: strict mode Jan 17 11:58:58.898472 kernel: efivars: Registered efivars operations Jan 17 11:58:58.898482 kernel: vgaarb: loaded Jan 17 11:58:58.898489 kernel: clocksource: Switched to clocksource arch_sys_counter Jan 17 11:58:58.898496 kernel: VFS: Disk quotas dquot_6.6.0 Jan 17 11:58:58.898504 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 17 11:58:58.898520 kernel: pnp: PnP ACPI init Jan 17 11:58:58.898599 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Jan 17 11:58:58.898610 kernel: pnp: PnP ACPI: found 1 devices Jan 17 11:58:58.898617 kernel: NET: Registered PF_INET protocol family Jan 17 11:58:58.898627 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 17 11:58:58.898634 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jan 17 11:58:58.898642 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 17 11:58:58.898649 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 17 11:58:58.898656 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jan 17 11:58:58.898664 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jan 17 11:58:58.898671 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 17 11:58:58.898678 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 17 11:58:58.898685 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 17 11:58:58.898694 kernel: PCI: CLS 0 bytes, default 64 Jan 17 11:58:58.898701 kernel: kvm [1]: HYP mode 
not available Jan 17 11:58:58.898708 kernel: Initialise system trusted keyrings Jan 17 11:58:58.898716 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jan 17 11:58:58.898730 kernel: Key type asymmetric registered Jan 17 11:58:58.898739 kernel: Asymmetric key parser 'x509' registered Jan 17 11:58:58.898746 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jan 17 11:58:58.898753 kernel: io scheduler mq-deadline registered Jan 17 11:58:58.898760 kernel: io scheduler kyber registered Jan 17 11:58:58.898769 kernel: io scheduler bfq registered Jan 17 11:58:58.898777 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Jan 17 11:58:58.898784 kernel: ACPI: button: Power Button [PWRB] Jan 17 11:58:58.898792 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Jan 17 11:58:58.898861 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007) Jan 17 11:58:58.898871 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 17 11:58:58.898878 kernel: thunder_xcv, ver 1.0 Jan 17 11:58:58.898885 kernel: thunder_bgx, ver 1.0 Jan 17 11:58:58.898893 kernel: nicpf, ver 1.0 Jan 17 11:58:58.898902 kernel: nicvf, ver 1.0 Jan 17 11:58:58.898973 kernel: rtc-efi rtc-efi.0: registered as rtc0 Jan 17 11:58:58.899036 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-01-17T11:58:58 UTC (1737115138) Jan 17 11:58:58.899046 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 17 11:58:58.899053 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available Jan 17 11:58:58.899061 kernel: watchdog: Delayed init of the lockup detector failed: -19 Jan 17 11:58:58.899068 kernel: watchdog: Hard watchdog permanently disabled Jan 17 11:58:58.899075 kernel: NET: Registered PF_INET6 protocol family Jan 17 11:58:58.899084 kernel: Segment Routing with IPv6 Jan 17 11:58:58.899092 kernel: In-situ OAM (IOAM) with IPv6 Jan 17 11:58:58.899099 kernel: NET: Registered PF_PACKET protocol family Jan 17 11:58:58.899106 kernel: Key type dns_resolver registered Jan 17 11:58:58.899113 kernel: registered taskstats version 1 Jan 17 11:58:58.899120 kernel: Loading compiled-in X.509 certificates Jan 17 11:58:58.899128 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: e5b890cba32c3e1c766d9a9b821ee4d2154ffee7' Jan 17 11:58:58.899135 kernel: Key type .fscrypt registered Jan 17 11:58:58.899142 kernel: Key type fscrypt-provisioning registered Jan 17 11:58:58.899151 kernel: ima: No TPM chip found, activating TPM-bypass! 
Jan 17 11:58:58.899158 kernel: ima: Allocated hash algorithm: sha1 Jan 17 11:58:58.899165 kernel: ima: No architecture policies found Jan 17 11:58:58.899172 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Jan 17 11:58:58.899180 kernel: clk: Disabling unused clocks Jan 17 11:58:58.899196 kernel: Freeing unused kernel memory: 39360K Jan 17 11:58:58.899204 kernel: Run /init as init process Jan 17 11:58:58.899211 kernel: with arguments: Jan 17 11:58:58.899218 kernel: /init Jan 17 11:58:58.899227 kernel: with environment: Jan 17 11:58:58.899234 kernel: HOME=/ Jan 17 11:58:58.899241 kernel: TERM=linux Jan 17 11:58:58.899248 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 17 11:58:58.899257 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 17 11:58:58.899266 systemd[1]: Detected virtualization kvm. Jan 17 11:58:58.899274 systemd[1]: Detected architecture arm64. Jan 17 11:58:58.899281 systemd[1]: Running in initrd. Jan 17 11:58:58.899290 systemd[1]: No hostname configured, using default hostname. Jan 17 11:58:58.899298 systemd[1]: Hostname set to . Jan 17 11:58:58.899306 systemd[1]: Initializing machine ID from VM UUID. Jan 17 11:58:58.899313 systemd[1]: Queued start job for default target initrd.target. Jan 17 11:58:58.899321 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 17 11:58:58.899329 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 17 11:58:58.899337 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 17 11:58:58.899345 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 17 11:58:58.899354 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 17 11:58:58.899362 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 17 11:58:58.899371 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 17 11:58:58.899379 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 17 11:58:58.899387 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 17 11:58:58.899395 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 17 11:58:58.899404 systemd[1]: Reached target paths.target - Path Units. Jan 17 11:58:58.899412 systemd[1]: Reached target slices.target - Slice Units. Jan 17 11:58:58.899419 systemd[1]: Reached target swap.target - Swaps. Jan 17 11:58:58.899427 systemd[1]: Reached target timers.target - Timer Units. Jan 17 11:58:58.899435 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 17 11:58:58.899442 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 17 11:58:58.899450 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 17 11:58:58.899458 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 17 11:58:58.899466 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
Jan 17 11:58:58.899475 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 17 11:58:58.899483 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 17 11:58:58.899490 systemd[1]: Reached target sockets.target - Socket Units. Jan 17 11:58:58.899498 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 17 11:58:58.899506 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 17 11:58:58.899514 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 17 11:58:58.899521 systemd[1]: Starting systemd-fsck-usr.service... Jan 17 11:58:58.899529 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 17 11:58:58.899537 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 17 11:58:58.899546 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 11:58:58.899554 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 17 11:58:58.899561 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 17 11:58:58.899569 systemd[1]: Finished systemd-fsck-usr.service. Jan 17 11:58:58.899593 systemd-journald[237]: Collecting audit messages is disabled. Jan 17 11:58:58.899614 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 17 11:58:58.899622 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 11:58:58.899630 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 17 11:58:58.899639 systemd-journald[237]: Journal started Jan 17 11:58:58.899658 systemd-journald[237]: Runtime Journal (/run/log/journal/c795f11485c44b7facc93e8d794228ee) is 5.9M, max 47.3M, 41.4M free. Jan 17 11:58:58.884036 systemd-modules-load[239]: Inserted module 'overlay' Jan 17 11:58:58.902267 systemd[1]: Started systemd-journald.service - Journal Service. Jan 17 11:58:58.902292 kernel: Bridge firewalling registered Jan 17 11:58:58.902289 systemd-modules-load[239]: Inserted module 'br_netfilter' Jan 17 11:58:58.902618 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 17 11:58:58.903938 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 17 11:58:58.922329 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 17 11:58:58.923735 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 17 11:58:58.925259 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 17 11:58:58.927109 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 17 11:58:58.935557 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 17 11:58:58.937396 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 17 11:58:58.940899 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 17 11:58:58.942011 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 17 11:58:58.956371 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 17 11:58:58.958208 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Jan 17 11:58:58.968995 dracut-cmdline[276]: dracut-dracut-053 Jan 17 11:58:58.971456 dracut-cmdline[276]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=1dec90e7382e4708d8bb0385f9465c79a53a2c2baf70ef34aed11855f47d17b3 Jan 17 11:58:58.987218 systemd-resolved[277]: Positive Trust Anchors: Jan 17 11:58:58.987232 systemd-resolved[277]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 17 11:58:58.987264 systemd-resolved[277]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 17 11:58:58.991962 systemd-resolved[277]: Defaulting to hostname 'linux'. Jan 17 11:58:58.992890 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 17 11:58:58.994090 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 17 11:58:59.036216 kernel: SCSI subsystem initialized Jan 17 11:58:59.041202 kernel: Loading iSCSI transport class v2.0-870. Jan 17 11:58:59.048210 kernel: iscsi: registered transport (tcp) Jan 17 11:58:59.062488 kernel: iscsi: registered transport (qla4xxx) Jan 17 11:58:59.062547 kernel: QLogic iSCSI HBA Driver Jan 17 11:58:59.103652 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 17 11:58:59.115305 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 17 11:58:59.131206 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 17 11:58:59.131243 kernel: device-mapper: uevent: version 1.0.3 Jan 17 11:58:59.132372 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 17 11:58:59.177213 kernel: raid6: neonx8 gen() 15720 MB/s Jan 17 11:58:59.194195 kernel: raid6: neonx4 gen() 15574 MB/s Jan 17 11:58:59.211207 kernel: raid6: neonx2 gen() 13255 MB/s Jan 17 11:58:59.228197 kernel: raid6: neonx1 gen() 10486 MB/s Jan 17 11:58:59.245197 kernel: raid6: int64x8 gen() 6959 MB/s Jan 17 11:58:59.262195 kernel: raid6: int64x4 gen() 7331 MB/s Jan 17 11:58:59.279198 kernel: raid6: int64x2 gen() 6127 MB/s Jan 17 11:58:59.296215 kernel: raid6: int64x1 gen() 5055 MB/s Jan 17 11:58:59.296274 kernel: raid6: using algorithm neonx8 gen() 15720 MB/s Jan 17 11:58:59.313201 kernel: raid6: .... xor() 11921 MB/s, rmw enabled Jan 17 11:58:59.313220 kernel: raid6: using neon recovery algorithm Jan 17 11:58:59.318199 kernel: xor: measuring software checksum speed Jan 17 11:58:59.318219 kernel: 8regs : 19200 MB/sec Jan 17 11:58:59.319592 kernel: 32regs : 18594 MB/sec Jan 17 11:58:59.319607 kernel: arm64_neon : 26297 MB/sec Jan 17 11:58:59.319616 kernel: xor: using function: arm64_neon (26297 MB/sec) Jan 17 11:58:59.373211 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 17 11:58:59.383883 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. 
Jan 17 11:58:59.396402 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 17 11:58:59.407415 systemd-udevd[462]: Using default interface naming scheme 'v255'. Jan 17 11:58:59.410567 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 17 11:58:59.417340 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 17 11:58:59.429827 dracut-pre-trigger[470]: rd.md=0: removing MD RAID activation Jan 17 11:58:59.457351 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 17 11:58:59.464336 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 17 11:58:59.502705 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 17 11:58:59.508320 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 17 11:58:59.522239 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 17 11:58:59.523682 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 17 11:58:59.526212 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 17 11:58:59.527793 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 17 11:58:59.536357 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 17 11:58:59.543524 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues Jan 17 11:58:59.550728 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Jan 17 11:58:59.550822 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 17 11:58:59.550834 kernel: GPT:9289727 != 19775487 Jan 17 11:58:59.550843 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 17 11:58:59.550853 kernel: GPT:9289727 != 19775487 Jan 17 11:58:59.550861 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 17 11:58:59.550871 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 17 11:58:59.546892 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 17 11:58:59.551679 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 17 11:58:59.551791 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 17 11:58:59.554013 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 17 11:58:59.554819 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 17 11:58:59.554950 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 11:58:59.556756 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 11:58:59.565959 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 11:58:59.569854 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (507) Jan 17 11:58:59.569883 kernel: BTRFS: device fsid 8c8354db-e4b6-4022-87e4-d06cc74d2d9f devid 1 transid 40 /dev/vda3 scanned by (udev-worker) (511) Jan 17 11:58:59.577829 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jan 17 11:58:59.579004 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 11:58:59.589300 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jan 17 11:58:59.593502 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. 
Jan 17 11:58:59.597061 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jan 17 11:58:59.597983 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jan 17 11:58:59.611334 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 17 11:58:59.612856 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 17 11:58:59.617395 disk-uuid[550]: Primary Header is updated. Jan 17 11:58:59.617395 disk-uuid[550]: Secondary Entries is updated. Jan 17 11:58:59.617395 disk-uuid[550]: Secondary Header is updated. Jan 17 11:58:59.622233 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 17 11:58:59.633956 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 17 11:59:00.634211 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 17 11:59:00.634738 disk-uuid[551]: The operation has completed successfully. Jan 17 11:59:00.655659 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 17 11:59:00.655774 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 17 11:59:00.676392 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 17 11:59:00.679112 sh[574]: Success Jan 17 11:59:00.696217 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Jan 17 11:59:00.730606 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 17 11:59:00.732283 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 17 11:59:00.732998 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 17 11:59:00.742672 kernel: BTRFS info (device dm-0): first mount of filesystem 8c8354db-e4b6-4022-87e4-d06cc74d2d9f Jan 17 11:59:00.742714 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jan 17 11:59:00.742727 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 17 11:59:00.743493 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 17 11:59:00.744501 kernel: BTRFS info (device dm-0): using free space tree Jan 17 11:59:00.747918 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 17 11:59:00.749029 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 17 11:59:00.758340 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 17 11:59:00.759738 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 17 11:59:00.767563 kernel: BTRFS info (device vda6): first mount of filesystem 5a5108d6-bc75-4f85-aab0-f326070fd0b5 Jan 17 11:59:00.767603 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Jan 17 11:59:00.767621 kernel: BTRFS info (device vda6): using free space tree Jan 17 11:59:00.770204 kernel: BTRFS info (device vda6): auto enabling async discard Jan 17 11:59:00.777977 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 17 11:59:00.778763 kernel: BTRFS info (device vda6): last unmount of filesystem 5a5108d6-bc75-4f85-aab0-f326070fd0b5 Jan 17 11:59:00.784486 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 17 11:59:00.790365 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Jan 17 11:59:00.862248 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 17 11:59:00.870349 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 17 11:59:00.890845 systemd-networkd[763]: lo: Link UP Jan 17 11:59:00.890857 systemd-networkd[763]: lo: Gained carrier Jan 17 11:59:00.891589 systemd-networkd[763]: Enumeration completed Jan 17 11:59:00.892307 systemd-networkd[763]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 11:59:00.892310 systemd-networkd[763]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 17 11:59:00.895588 ignition[663]: Ignition 2.19.0 Jan 17 11:59:00.893177 systemd-networkd[763]: eth0: Link UP Jan 17 11:59:00.895595 ignition[663]: Stage: fetch-offline Jan 17 11:59:00.893180 systemd-networkd[763]: eth0: Gained carrier Jan 17 11:59:00.895629 ignition[663]: no configs at "/usr/lib/ignition/base.d" Jan 17 11:59:00.893665 systemd-networkd[763]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 11:59:00.895637 ignition[663]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 17 11:59:00.894727 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 17 11:59:00.895874 ignition[663]: parsed url from cmdline: "" Jan 17 11:59:00.895725 systemd[1]: Reached target network.target - Network. Jan 17 11:59:00.895877 ignition[663]: no config URL provided Jan 17 11:59:00.895882 ignition[663]: reading system config file "/usr/lib/ignition/user.ign" Jan 17 11:59:00.895889 ignition[663]: no config at "/usr/lib/ignition/user.ign" Jan 17 11:59:00.895912 ignition[663]: op(1): [started] loading QEMU firmware config module Jan 17 11:59:00.895917 ignition[663]: op(1): executing: "modprobe" "qemu_fw_cfg" Jan 17 11:59:00.906546 ignition[663]: op(1): [finished] loading QEMU firmware config module Jan 17 11:59:00.915230 systemd-networkd[763]: eth0: DHCPv4 address 10.0.0.32/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 17 11:59:00.946976 ignition[663]: parsing config with SHA512: b9799ea11aa6dbf928d28c99c7159b2f33ae3b5190b13f06c47a9551db5ee9fc6352309004f6dca3d3730d05f7f3a59a45c21e548221ab506c54dca0e04efc61 Jan 17 11:59:00.951054 unknown[663]: fetched base config from "system" Jan 17 11:59:00.951063 unknown[663]: fetched user config from "qemu" Jan 17 11:59:00.952925 ignition[663]: fetch-offline: fetch-offline passed Jan 17 11:59:00.953053 ignition[663]: Ignition finished successfully Jan 17 11:59:00.957257 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 17 11:59:00.958338 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jan 17 11:59:00.967343 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 17 11:59:00.977976 ignition[769]: Ignition 2.19.0 Jan 17 11:59:00.977989 ignition[769]: Stage: kargs Jan 17 11:59:00.978180 ignition[769]: no configs at "/usr/lib/ignition/base.d" Jan 17 11:59:00.978236 ignition[769]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 17 11:59:00.979147 ignition[769]: kargs: kargs passed Jan 17 11:59:00.980855 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). 
Jan 17 11:59:00.979203 ignition[769]: Ignition finished successfully Jan 17 11:59:00.983058 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 17 11:59:00.996756 ignition[777]: Ignition 2.19.0 Jan 17 11:59:00.996766 ignition[777]: Stage: disks Jan 17 11:59:00.996944 ignition[777]: no configs at "/usr/lib/ignition/base.d" Jan 17 11:59:00.996954 ignition[777]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 17 11:59:00.997851 ignition[777]: disks: disks passed Jan 17 11:59:00.997896 ignition[777]: Ignition finished successfully Jan 17 11:59:01.001249 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 17 11:59:01.002371 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 17 11:59:01.003653 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 17 11:59:01.005308 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 17 11:59:01.006764 systemd[1]: Reached target sysinit.target - System Initialization. Jan 17 11:59:01.008051 systemd[1]: Reached target basic.target - Basic System. Jan 17 11:59:01.025369 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 17 11:59:01.036595 systemd-fsck[787]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jan 17 11:59:01.040546 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 17 11:59:01.057372 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 17 11:59:01.097117 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 17 11:59:01.098310 kernel: EXT4-fs (vda9): mounted filesystem 5d516319-3144-49e6-9760-d0f29faba535 r/w with ordered data mode. Quota mode: none. Jan 17 11:59:01.098215 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 17 11:59:01.112274 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 17 11:59:01.114216 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 17 11:59:01.115022 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 17 11:59:01.115062 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 17 11:59:01.115085 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 17 11:59:01.120347 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 17 11:59:01.122264 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 17 11:59:01.128238 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (795) Jan 17 11:59:01.128285 kernel: BTRFS info (device vda6): first mount of filesystem 5a5108d6-bc75-4f85-aab0-f326070fd0b5 Jan 17 11:59:01.128297 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Jan 17 11:59:01.129394 kernel: BTRFS info (device vda6): using free space tree Jan 17 11:59:01.132200 kernel: BTRFS info (device vda6): auto enabling async discard Jan 17 11:59:01.133570 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 17 11:59:01.168833 initrd-setup-root[820]: cut: /sysroot/etc/passwd: No such file or directory Jan 17 11:59:01.173263 initrd-setup-root[827]: cut: /sysroot/etc/group: No such file or directory Jan 17 11:59:01.177135 initrd-setup-root[834]: cut: /sysroot/etc/shadow: No such file or directory Jan 17 11:59:01.180314 initrd-setup-root[841]: cut: /sysroot/etc/gshadow: No such file or directory Jan 17 11:59:01.252082 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 17 11:59:01.259288 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 17 11:59:01.260669 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 17 11:59:01.265198 kernel: BTRFS info (device vda6): last unmount of filesystem 5a5108d6-bc75-4f85-aab0-f326070fd0b5 Jan 17 11:59:01.284399 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 17 11:59:01.290190 ignition[909]: INFO : Ignition 2.19.0 Jan 17 11:59:01.290190 ignition[909]: INFO : Stage: mount Jan 17 11:59:01.291456 ignition[909]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 17 11:59:01.291456 ignition[909]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 17 11:59:01.291456 ignition[909]: INFO : mount: mount passed Jan 17 11:59:01.291456 ignition[909]: INFO : Ignition finished successfully Jan 17 11:59:01.292628 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 17 11:59:01.309303 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 17 11:59:01.741948 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 17 11:59:01.751387 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 17 11:59:01.757739 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (922) Jan 17 11:59:01.757776 kernel: BTRFS info (device vda6): first mount of filesystem 5a5108d6-bc75-4f85-aab0-f326070fd0b5 Jan 17 11:59:01.757787 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Jan 17 11:59:01.758424 kernel: BTRFS info (device vda6): using free space tree Jan 17 11:59:01.761206 kernel: BTRFS info (device vda6): auto enabling async discard Jan 17 11:59:01.761975 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 17 11:59:01.783161 ignition[939]: INFO : Ignition 2.19.0 Jan 17 11:59:01.783161 ignition[939]: INFO : Stage: files Jan 17 11:59:01.784406 ignition[939]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 17 11:59:01.784406 ignition[939]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 17 11:59:01.784406 ignition[939]: DEBUG : files: compiled without relabeling support, skipping Jan 17 11:59:01.787141 ignition[939]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 17 11:59:01.787141 ignition[939]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 17 11:59:01.789997 ignition[939]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 17 11:59:01.791030 ignition[939]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 17 11:59:01.791030 ignition[939]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 17 11:59:01.790546 unknown[939]: wrote ssh authorized keys file for user: core Jan 17 11:59:01.793873 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Jan 17 11:59:01.793873 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Jan 17 11:59:01.841722 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 17 11:59:01.966888 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Jan 17 11:59:01.969232 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 17 11:59:01.969232 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 17 11:59:01.969232 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 17 11:59:01.969232 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 17 11:59:01.969232 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 17 11:59:01.969232 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 17 11:59:01.969232 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 17 11:59:01.969232 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 17 11:59:01.979711 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 17 11:59:01.979711 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 17 11:59:01.979711 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" Jan 17 11:59:01.979711 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link 
"/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" Jan 17 11:59:01.979711 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" Jan 17 11:59:01.979711 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-arm64.raw: attempt #1 Jan 17 11:59:02.126438 systemd-networkd[763]: eth0: Gained IPv6LL Jan 17 11:59:02.318223 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 17 11:59:02.948654 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" Jan 17 11:59:02.948654 ignition[939]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 17 11:59:02.951672 ignition[939]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 17 11:59:02.951672 ignition[939]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 17 11:59:02.951672 ignition[939]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 17 11:59:02.951672 ignition[939]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Jan 17 11:59:02.951672 ignition[939]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 17 11:59:02.951672 ignition[939]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 17 11:59:02.951672 ignition[939]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Jan 17 11:59:02.951672 ignition[939]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Jan 17 11:59:02.981347 ignition[939]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Jan 17 11:59:03.007368 ignition[939]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jan 17 11:59:03.009627 ignition[939]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Jan 17 11:59:03.009627 ignition[939]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Jan 17 11:59:03.009627 ignition[939]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Jan 17 11:59:03.009627 ignition[939]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 17 11:59:03.009627 ignition[939]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 17 11:59:03.009627 ignition[939]: INFO : files: files passed Jan 17 11:59:03.009627 ignition[939]: INFO : Ignition finished successfully Jan 17 11:59:03.011142 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 17 11:59:03.020342 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 17 11:59:03.022333 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... 
Jan 17 11:59:03.024324 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 17 11:59:03.024399 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 17 11:59:03.029515 initrd-setup-root-after-ignition[968]: grep: /sysroot/oem/oem-release: No such file or directory Jan 17 11:59:03.032481 initrd-setup-root-after-ignition[970]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 17 11:59:03.032481 initrd-setup-root-after-ignition[970]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 17 11:59:03.034810 initrd-setup-root-after-ignition[974]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 17 11:59:03.036775 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 17 11:59:03.037835 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 17 11:59:03.044386 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 17 11:59:03.061335 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 17 11:59:03.062077 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 17 11:59:03.063145 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 17 11:59:03.063919 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 17 11:59:03.065442 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 17 11:59:03.066120 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 17 11:59:03.080111 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 17 11:59:03.081980 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 17 11:59:03.092871 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 17 11:59:03.093796 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 17 11:59:03.095227 systemd[1]: Stopped target timers.target - Timer Units. Jan 17 11:59:03.096581 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 17 11:59:03.096700 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 17 11:59:03.098511 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 17 11:59:03.099958 systemd[1]: Stopped target basic.target - Basic System. Jan 17 11:59:03.101121 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 17 11:59:03.102410 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 17 11:59:03.103792 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 17 11:59:03.105248 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 17 11:59:03.106587 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 17 11:59:03.108012 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 17 11:59:03.109434 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 17 11:59:03.110672 systemd[1]: Stopped target swap.target - Swaps. Jan 17 11:59:03.111783 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 17 11:59:03.111884 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 17 11:59:03.113590 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. 
Jan 17 11:59:03.114983 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 17 11:59:03.116346 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 17 11:59:03.117719 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 17 11:59:03.118609 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 17 11:59:03.118719 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 17 11:59:03.120714 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 17 11:59:03.120819 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 17 11:59:03.122169 systemd[1]: Stopped target paths.target - Path Units. Jan 17 11:59:03.123304 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 17 11:59:03.124569 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 17 11:59:03.125476 systemd[1]: Stopped target slices.target - Slice Units. Jan 17 11:59:03.126712 systemd[1]: Stopped target sockets.target - Socket Units. Jan 17 11:59:03.128423 systemd[1]: iscsid.socket: Deactivated successfully. Jan 17 11:59:03.128500 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 17 11:59:03.129579 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 17 11:59:03.129653 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 17 11:59:03.130793 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 17 11:59:03.130890 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 17 11:59:03.132115 systemd[1]: ignition-files.service: Deactivated successfully. Jan 17 11:59:03.132212 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 17 11:59:03.145341 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 17 11:59:03.146617 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 17 11:59:03.147227 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 17 11:59:03.147331 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 17 11:59:03.148748 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 17 11:59:03.148843 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 17 11:59:03.152874 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 17 11:59:03.152960 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 17 11:59:03.157759 ignition[996]: INFO : Ignition 2.19.0 Jan 17 11:59:03.157759 ignition[996]: INFO : Stage: umount Jan 17 11:59:03.159076 ignition[996]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 17 11:59:03.159076 ignition[996]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 17 11:59:03.159076 ignition[996]: INFO : umount: umount passed Jan 17 11:59:03.159076 ignition[996]: INFO : Ignition finished successfully Jan 17 11:59:03.160207 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 17 11:59:03.160360 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 17 11:59:03.162310 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 17 11:59:03.162692 systemd[1]: Stopped target network.target - Network. Jan 17 11:59:03.166522 systemd[1]: ignition-disks.service: Deactivated successfully. 
Jan 17 11:59:03.166574 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 17 11:59:03.167872 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 17 11:59:03.167908 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 17 11:59:03.169259 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 17 11:59:03.169296 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 17 11:59:03.170536 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 17 11:59:03.170573 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 17 11:59:03.171974 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 17 11:59:03.173150 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 17 11:59:03.183241 systemd-networkd[763]: eth0: DHCPv6 lease lost Jan 17 11:59:03.185077 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 17 11:59:03.185229 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 17 11:59:03.186882 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 17 11:59:03.186970 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 17 11:59:03.189111 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 17 11:59:03.189233 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 17 11:59:03.208670 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 17 11:59:03.209942 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 17 11:59:03.210003 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 17 11:59:03.211456 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 17 11:59:03.211495 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 17 11:59:03.212967 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 17 11:59:03.213006 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 17 11:59:03.214753 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 17 11:59:03.214794 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 17 11:59:03.215760 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 17 11:59:03.219228 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 17 11:59:03.219307 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 17 11:59:03.221632 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 17 11:59:03.221720 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 17 11:59:03.226780 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 17 11:59:03.226862 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 17 11:59:03.228634 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 17 11:59:03.228764 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 17 11:59:03.230512 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 17 11:59:03.230563 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 17 11:59:03.231475 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 17 11:59:03.231506 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. 
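The block above is one long teardown pass: PID 1 stops and closes the remaining initrd units and sockets (Ignition stages, networkd, resolved, udevd, the dracut hooks) before switching root. Sequences like this are easier to audit if the console format is parsed mechanically; the regex below is a minimal sketch fitted only to the line layout visible in this log (kernel lines, which carry no [pid] field, are deliberately skipped), and the field names are my own.

    import re
    from collections import Counter

    # Rough parser for the console/journal format seen above, e.g.
    #   "Jan 17 11:59:03.166574 systemd[1]: Stopped ignition-disks.service - ..."
    LINE = re.compile(
        r"^(?P<month>\w{3}) (?P<day>\d{1,2}) (?P<time>\d{2}:\d{2}:\d{2}\.\d+) "
        r"(?P<source>[^\[:]+)\[(?P<pid>\d+)\]: (?P<message>.*)$"
    )

    def summarize(lines):
        """Count how many entries each source (e.g. systemd, systemd-networkd) emitted."""
        counts = Counter()
        for line in lines:
            m = LINE.match(line)
            if m:  # kernel lines without a [pid] do not match and are skipped
                counts[m.group("source").strip()] += 1
        return counts

    sample = [
        "Jan 17 11:59:03.166574 systemd[1]: Stopped ignition-disks.service - Ignition (disks).",
        "Jan 17 11:59:03.183241 systemd-networkd[763]: eth0: DHCPv6 lease lost",
    ]
    print(summarize(sample))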
Jan 17 11:59:03.232907 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 17 11:59:03.232944 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 17 11:59:03.235111 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 17 11:59:03.235151 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 17 11:59:03.237172 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 17 11:59:03.237238 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 17 11:59:03.248439 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 17 11:59:03.249238 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 17 11:59:03.249296 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 17 11:59:03.250824 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 17 11:59:03.250863 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 17 11:59:03.252271 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 17 11:59:03.252307 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 17 11:59:03.253926 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 17 11:59:03.253965 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 11:59:03.255699 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 17 11:59:03.255783 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 17 11:59:03.259615 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 17 11:59:03.261390 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 17 11:59:03.271475 systemd[1]: Switching root. Jan 17 11:59:03.295242 systemd-journald[237]: Journal stopped Jan 17 11:59:03.993310 systemd-journald[237]: Received SIGTERM from PID 1 (systemd). Jan 17 11:59:03.993369 kernel: SELinux: policy capability network_peer_controls=1 Jan 17 11:59:03.993383 kernel: SELinux: policy capability open_perms=1 Jan 17 11:59:03.993397 kernel: SELinux: policy capability extended_socket_class=1 Jan 17 11:59:03.993413 kernel: SELinux: policy capability always_check_network=0 Jan 17 11:59:03.993424 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 17 11:59:03.993433 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 17 11:59:03.993443 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 17 11:59:03.993455 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 17 11:59:03.993465 kernel: audit: type=1403 audit(1737115143.432:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 17 11:59:03.993476 systemd[1]: Successfully loaded SELinux policy in 30.415ms. Jan 17 11:59:03.993489 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.527ms. Jan 17 11:59:03.993501 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 17 11:59:03.993511 systemd[1]: Detected virtualization kvm. Jan 17 11:59:03.993522 systemd[1]: Detected architecture arm64. 
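After the switch root, journald restarts, the SELinux policy is loaded, and the "systemd 255 running in system mode (...)" line lists the compile-time features of this build with a +/- prefix (for example +SELINUX but -APPARMOR on this image). A small sketch for splitting that string into enabled and disabled feature sets; the +/- tokens in the sample are copied from the log line above.

    # Split a systemd feature string such as "+PAM +AUDIT -APPARMOR ..." into
    # the features compiled in (+) and the ones compiled out (-).
    features = ("+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT "
                "-GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN "
                "+IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT "
                "-QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK "
                "-XKBCOMMON +UTMP -SYSVINIT")

    enabled = {f[1:] for f in features.split() if f.startswith("+")}
    disabled = {f[1:] for f in features.split() if f.startswith("-")}
    print(sorted(disabled))   # e.g. ACL, APPARMOR, BPF_FRAMEWORK, ...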
Jan 17 11:59:03.993532 systemd[1]: Detected first boot. Jan 17 11:59:03.993543 systemd[1]: Initializing machine ID from VM UUID. Jan 17 11:59:03.993555 zram_generator::config[1041]: No configuration found. Jan 17 11:59:03.993567 systemd[1]: Populated /etc with preset unit settings. Jan 17 11:59:03.993577 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 17 11:59:03.993588 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 17 11:59:03.993598 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 17 11:59:03.993609 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 17 11:59:03.993622 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 17 11:59:03.993632 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 17 11:59:03.993644 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 17 11:59:03.993658 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 17 11:59:03.993674 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 17 11:59:03.993688 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 17 11:59:03.993698 systemd[1]: Created slice user.slice - User and Session Slice. Jan 17 11:59:03.993708 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 17 11:59:03.993719 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 17 11:59:03.993730 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 17 11:59:03.993741 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 17 11:59:03.993753 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 17 11:59:03.993781 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 17 11:59:03.993791 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Jan 17 11:59:03.993801 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 17 11:59:03.993812 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 17 11:59:03.993822 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 17 11:59:03.993833 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 17 11:59:03.993846 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 17 11:59:03.993856 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 17 11:59:03.993867 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 17 11:59:03.993879 systemd[1]: Reached target slices.target - Slice Units. Jan 17 11:59:03.993890 systemd[1]: Reached target swap.target - Swaps. Jan 17 11:59:03.993900 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 17 11:59:03.993911 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 17 11:59:03.993922 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 17 11:59:03.993933 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. 
Jan 17 11:59:03.993943 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 17 11:59:03.993955 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 17 11:59:03.993966 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 17 11:59:03.993976 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 17 11:59:03.993987 systemd[1]: Mounting media.mount - External Media Directory... Jan 17 11:59:03.993997 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 17 11:59:03.994008 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 17 11:59:03.994018 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 17 11:59:03.994029 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 17 11:59:03.994041 systemd[1]: Reached target machines.target - Containers. Jan 17 11:59:03.994052 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 17 11:59:03.994063 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 11:59:03.994073 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 17 11:59:03.994084 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 17 11:59:03.994094 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 17 11:59:03.994105 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 17 11:59:03.994115 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 17 11:59:03.994126 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 17 11:59:03.994138 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 17 11:59:03.994149 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 17 11:59:03.994160 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 17 11:59:03.994170 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 17 11:59:03.994181 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 17 11:59:03.994200 systemd[1]: Stopped systemd-fsck-usr.service. Jan 17 11:59:03.994213 kernel: fuse: init (API version 7.39) Jan 17 11:59:03.994223 kernel: loop: module loaded Jan 17 11:59:03.994234 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 17 11:59:03.994246 kernel: ACPI: bus type drm_connector registered Jan 17 11:59:03.994256 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 17 11:59:03.994267 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 17 11:59:03.994278 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 17 11:59:03.994289 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 17 11:59:03.994299 systemd[1]: verity-setup.service: Deactivated successfully. Jan 17 11:59:03.994310 systemd[1]: Stopped verity-setup.service. Jan 17 11:59:03.994320 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. 
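The modprobe@configfs/dm_mod/drm/efi_pstore/fuse/loop services being started above are instances of a single systemd template unit: the part after "@" is the instance name the template expands (here, the module name handed to modprobe). A tiny sketch of that naming rule, which is all the instancing amounts to from the outside; the function name is my own.

    def split_instance(unit: str) -> tuple[str, str | None]:
        """Map an instantiated unit name to (template, instance).

        "modprobe@dm_mod.service"  -> ("modprobe@.service", "dm_mod")
        "systemd-journald.service" -> ("systemd-journald.service", None)
        """
        name, _, suffix = unit.rpartition(".")
        if "@" in name:
            prefix, _, instance = name.partition("@")
            return f"{prefix}@.{suffix}", instance
        return unit, None

    for u in ("modprobe@dm_mod.service", "modprobe@fuse.service"):
        print(u, "->", split_instance(u))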
Jan 17 11:59:03.994331 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 17 11:59:03.994360 systemd-journald[1109]: Collecting audit messages is disabled. Jan 17 11:59:03.994382 systemd[1]: Mounted media.mount - External Media Directory. Jan 17 11:59:03.994393 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 17 11:59:03.994403 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 17 11:59:03.994414 systemd-journald[1109]: Journal started Jan 17 11:59:03.994436 systemd-journald[1109]: Runtime Journal (/run/log/journal/c795f11485c44b7facc93e8d794228ee) is 5.9M, max 47.3M, 41.4M free. Jan 17 11:59:03.804730 systemd[1]: Queued start job for default target multi-user.target. Jan 17 11:59:03.823648 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 17 11:59:03.824106 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 17 11:59:03.996576 systemd[1]: Started systemd-journald.service - Journal Service. Jan 17 11:59:03.997148 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 17 11:59:03.999223 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 17 11:59:04.000288 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 17 11:59:04.001428 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 17 11:59:04.001555 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 17 11:59:04.002690 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 17 11:59:04.002828 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 17 11:59:04.003885 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 17 11:59:04.004006 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 17 11:59:04.005050 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 17 11:59:04.005179 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 17 11:59:04.006282 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 17 11:59:04.006406 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 17 11:59:04.007509 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 17 11:59:04.007651 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 17 11:59:04.008701 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 17 11:59:04.009883 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 17 11:59:04.011021 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 17 11:59:04.022270 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 17 11:59:04.028342 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 17 11:59:04.030109 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 17 11:59:04.030968 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 17 11:59:04.031003 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 17 11:59:04.032614 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 17 11:59:04.034457 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... 
Jan 17 11:59:04.036355 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 17 11:59:04.037230 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 11:59:04.039374 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 17 11:59:04.041056 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 17 11:59:04.042035 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 17 11:59:04.045387 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 17 11:59:04.046814 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 17 11:59:04.049417 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 17 11:59:04.054793 systemd-journald[1109]: Time spent on flushing to /var/log/journal/c795f11485c44b7facc93e8d794228ee is 19.565ms for 854 entries. Jan 17 11:59:04.054793 systemd-journald[1109]: System Journal (/var/log/journal/c795f11485c44b7facc93e8d794228ee) is 8.0M, max 195.6M, 187.6M free. Jan 17 11:59:04.087636 systemd-journald[1109]: Received client request to flush runtime journal. Jan 17 11:59:04.087687 kernel: loop0: detected capacity change from 0 to 114328 Jan 17 11:59:04.087702 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 17 11:59:04.053423 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 17 11:59:04.061350 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 17 11:59:04.066933 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 17 11:59:04.069008 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 17 11:59:04.070198 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 17 11:59:04.072053 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 17 11:59:04.073573 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 17 11:59:04.077123 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 17 11:59:04.092765 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 17 11:59:04.098816 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 17 11:59:04.103311 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 17 11:59:04.107010 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 17 11:59:04.111746 udevadm[1166]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jan 17 11:59:04.113244 systemd-tmpfiles[1153]: ACLs are not supported, ignoring. Jan 17 11:59:04.115353 kernel: loop1: detected capacity change from 0 to 189592 Jan 17 11:59:04.113259 systemd-tmpfiles[1153]: ACLs are not supported, ignoring. Jan 17 11:59:04.118783 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 17 11:59:04.131414 systemd[1]: Starting systemd-sysusers.service - Create System Users... 
Jan 17 11:59:04.132738 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 17 11:59:04.135030 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 17 11:59:04.156109 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 17 11:59:04.168350 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 17 11:59:04.169400 kernel: loop2: detected capacity change from 0 to 114432 Jan 17 11:59:04.182414 systemd-tmpfiles[1175]: ACLs are not supported, ignoring. Jan 17 11:59:04.182431 systemd-tmpfiles[1175]: ACLs are not supported, ignoring. Jan 17 11:59:04.187103 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 17 11:59:04.204227 kernel: loop3: detected capacity change from 0 to 114328 Jan 17 11:59:04.211215 kernel: loop4: detected capacity change from 0 to 189592 Jan 17 11:59:04.219206 kernel: loop5: detected capacity change from 0 to 114432 Jan 17 11:59:04.222463 (sd-merge)[1182]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jan 17 11:59:04.222909 (sd-merge)[1182]: Merged extensions into '/usr'. Jan 17 11:59:04.226602 systemd[1]: Reloading requested from client PID 1152 ('systemd-sysext') (unit systemd-sysext.service)... Jan 17 11:59:04.226615 systemd[1]: Reloading... Jan 17 11:59:04.267362 zram_generator::config[1205]: No configuration found. Jan 17 11:59:04.379010 ldconfig[1147]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 17 11:59:04.382058 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 11:59:04.418639 systemd[1]: Reloading finished in 191 ms. Jan 17 11:59:04.451547 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 17 11:59:04.452848 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 17 11:59:04.471424 systemd[1]: Starting ensure-sysext.service... Jan 17 11:59:04.473173 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 17 11:59:04.483518 systemd[1]: Reloading requested from client PID 1243 ('systemctl') (unit ensure-sysext.service)... Jan 17 11:59:04.483535 systemd[1]: Reloading... Jan 17 11:59:04.493686 systemd-tmpfiles[1244]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 17 11:59:04.494264 systemd-tmpfiles[1244]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 17 11:59:04.495019 systemd-tmpfiles[1244]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 17 11:59:04.495353 systemd-tmpfiles[1244]: ACLs are not supported, ignoring. Jan 17 11:59:04.495469 systemd-tmpfiles[1244]: ACLs are not supported, ignoring. Jan 17 11:59:04.497657 systemd-tmpfiles[1244]: Detected autofs mount point /boot during canonicalization of boot. Jan 17 11:59:04.498331 systemd-tmpfiles[1244]: Skipping /boot Jan 17 11:59:04.507145 systemd-tmpfiles[1244]: Detected autofs mount point /boot during canonicalization of boot. Jan 17 11:59:04.507259 systemd-tmpfiles[1244]: Skipping /boot Jan 17 11:59:04.534304 zram_generator::config[1277]: No configuration found. 
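The (sd-merge) lines show systemd-sysext overlaying the containerd-flatcar, docker-flatcar and kubernetes extension images onto /usr, after which systemd reloads so the newly visible units (docker.socket and friends) are picked up. A hedged sketch of how such images can be enumerated from Python; the search directories follow the usual sysext locations (/etc/extensions, /run/extensions, /var/lib/extensions), and each image is expected to carry a matching usr/lib/extension-release.d/extension-release.<name> file. Both points are general sysext conventions assumed here, not something stated in this log.

    from pathlib import Path

    # Typical systemd-sysext search paths (assumed; adjust for your image).
    SYSEXT_DIRS = ["/etc/extensions", "/run/extensions", "/var/lib/extensions"]

    def list_extension_images():
        """Yield candidate sysext images (raw disk images or plain directories)."""
        for d in map(Path, SYSEXT_DIRS):
            if not d.is_dir():
                continue
            for entry in sorted(d.iterdir()):
                if entry.suffix == ".raw" or entry.is_dir():
                    yield entry

    for image in list_extension_images():
        print(image)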
Jan 17 11:59:04.609603 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 11:59:04.645236 systemd[1]: Reloading finished in 161 ms. Jan 17 11:59:04.658592 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 17 11:59:04.672649 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 17 11:59:04.678561 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 17 11:59:04.682404 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 17 11:59:04.684954 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 17 11:59:04.690523 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 17 11:59:04.695525 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 17 11:59:04.699007 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 17 11:59:04.705837 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 11:59:04.707752 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 17 11:59:04.711386 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 17 11:59:04.714213 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 17 11:59:04.715085 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 11:59:04.719459 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 17 11:59:04.720927 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 17 11:59:04.725980 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 17 11:59:04.726114 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 17 11:59:04.727460 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 17 11:59:04.727590 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 17 11:59:04.729144 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 17 11:59:04.729276 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 17 11:59:04.736763 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 11:59:04.751523 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 17 11:59:04.753912 systemd-udevd[1317]: Using default interface naming scheme 'v255'. Jan 17 11:59:04.754345 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 17 11:59:04.760617 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 17 11:59:04.761748 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 11:59:04.763447 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 17 11:59:04.764410 augenrules[1337]: No rules Jan 17 11:59:04.766306 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. 
Jan 17 11:59:04.767722 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 17 11:59:04.771213 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 17 11:59:04.775932 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 17 11:59:04.776146 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 17 11:59:04.777445 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 17 11:59:04.779035 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 17 11:59:04.779250 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 17 11:59:04.780717 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 17 11:59:04.780848 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 17 11:59:04.783517 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 17 11:59:04.785001 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 17 11:59:04.794705 systemd[1]: Finished ensure-sysext.service. Jan 17 11:59:04.800756 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 11:59:04.813356 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 17 11:59:04.817392 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 17 11:59:04.821439 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 17 11:59:04.825054 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 17 11:59:04.825980 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 11:59:04.833871 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1353) Jan 17 11:59:04.832082 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 17 11:59:04.836020 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 17 11:59:04.837695 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 17 11:59:04.838131 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 17 11:59:04.839690 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 17 11:59:04.840872 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 17 11:59:04.840994 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 17 11:59:04.846564 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Jan 17 11:59:04.848433 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 17 11:59:04.848593 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 17 11:59:04.849871 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 17 11:59:04.850346 systemd-resolved[1312]: Positive Trust Anchors: Jan 17 11:59:04.850819 systemd-resolved[1312]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 17 11:59:04.850854 systemd-resolved[1312]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 17 11:59:04.860269 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 17 11:59:04.863926 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 17 11:59:04.863992 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 17 11:59:04.867882 systemd-resolved[1312]: Defaulting to hostname 'linux'. Jan 17 11:59:04.877279 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 17 11:59:04.878180 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 17 11:59:04.908938 systemd-networkd[1382]: lo: Link UP Jan 17 11:59:04.909028 systemd-networkd[1382]: lo: Gained carrier Jan 17 11:59:04.909889 systemd-networkd[1382]: Enumeration completed Jan 17 11:59:04.909983 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 17 11:59:04.910928 systemd[1]: Reached target network.target - Network. Jan 17 11:59:04.913718 systemd-networkd[1382]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 11:59:04.913723 systemd-networkd[1382]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 17 11:59:04.914391 systemd-networkd[1382]: eth0: Link UP Jan 17 11:59:04.914394 systemd-networkd[1382]: eth0: Gained carrier Jan 17 11:59:04.914407 systemd-networkd[1382]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 11:59:04.918348 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 17 11:59:04.919548 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 17 11:59:04.924007 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 17 11:59:04.925264 systemd-networkd[1382]: eth0: DHCPv4 address 10.0.0.32/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 17 11:59:04.925545 systemd[1]: Reached target time-set.target - System Time Set. Jan 17 11:59:04.925914 systemd-timesyncd[1383]: Network configuration changed, trying to establish connection. Jan 17 11:59:04.927166 systemd-timesyncd[1383]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jan 17 11:59:04.927301 systemd-timesyncd[1383]: Initial clock synchronization to Fri 2025-01-17 11:59:05.302940 UTC. Jan 17 11:59:04.947369 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 17 11:59:04.949507 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 11:59:04.950871 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. 
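Two things happen back to back above: systemd-resolved installs the root DNSSEC trust anchor (the ". IN DS 20326 8 2 ..." record: key tag 20326, algorithm 8 = RSASHA256, digest type 2 = SHA-256) together with the usual negative trust anchors, and systemd-networkd brings up eth0 with a DHCPv4 lease of 10.0.0.32/16 via gateway 10.0.0.1, which timesyncd then uses to reach the NTP server at 10.0.0.1:123. A quick sketch with the standard ipaddress module, using the lease values from the log, to show what that /16 actually spans:

    import ipaddress

    iface = ipaddress.ip_interface("10.0.0.32/16")   # lease from the log above
    gateway = ipaddress.ip_address("10.0.0.1")

    print(iface.network)                 # 10.0.0.0/16
    print(iface.network.netmask)         # 255.255.0.0
    print(iface.network.num_addresses)   # 65536
    print(gateway in iface.network)      # True: the gateway is on-link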
Jan 17 11:59:04.956232 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 17 11:59:04.958553 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 17 11:59:04.973351 lvm[1403]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 17 11:59:05.001614 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 11:59:05.005748 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 17 11:59:05.006924 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 17 11:59:05.009492 systemd[1]: Reached target sysinit.target - System Initialization. Jan 17 11:59:05.010416 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 17 11:59:05.011341 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 17 11:59:05.012445 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 17 11:59:05.013364 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 17 11:59:05.014315 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 17 11:59:05.015210 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 17 11:59:05.015255 systemd[1]: Reached target paths.target - Path Units. Jan 17 11:59:05.015921 systemd[1]: Reached target timers.target - Timer Units. Jan 17 11:59:05.017335 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 17 11:59:05.019560 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 17 11:59:05.027250 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 17 11:59:05.029295 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 17 11:59:05.030597 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 17 11:59:05.031509 systemd[1]: Reached target sockets.target - Socket Units. Jan 17 11:59:05.032222 systemd[1]: Reached target basic.target - Basic System. Jan 17 11:59:05.032964 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 17 11:59:05.032992 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 17 11:59:05.034001 systemd[1]: Starting containerd.service - containerd container runtime... Jan 17 11:59:05.035879 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 17 11:59:05.038382 lvm[1412]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 17 11:59:05.039402 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 17 11:59:05.043465 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 17 11:59:05.044661 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 17 11:59:05.045728 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 17 11:59:05.048903 jq[1415]: false Jan 17 11:59:05.049449 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... 
Jan 17 11:59:05.054447 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 17 11:59:05.057398 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 17 11:59:05.063440 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 17 11:59:05.063772 extend-filesystems[1416]: Found loop3 Jan 17 11:59:05.065662 extend-filesystems[1416]: Found loop4 Jan 17 11:59:05.065662 extend-filesystems[1416]: Found loop5 Jan 17 11:59:05.065662 extend-filesystems[1416]: Found vda Jan 17 11:59:05.065662 extend-filesystems[1416]: Found vda1 Jan 17 11:59:05.065662 extend-filesystems[1416]: Found vda2 Jan 17 11:59:05.065662 extend-filesystems[1416]: Found vda3 Jan 17 11:59:05.065662 extend-filesystems[1416]: Found usr Jan 17 11:59:05.065662 extend-filesystems[1416]: Found vda4 Jan 17 11:59:05.065662 extend-filesystems[1416]: Found vda6 Jan 17 11:59:05.065662 extend-filesystems[1416]: Found vda7 Jan 17 11:59:05.065662 extend-filesystems[1416]: Found vda9 Jan 17 11:59:05.065662 extend-filesystems[1416]: Checking size of /dev/vda9 Jan 17 11:59:05.065325 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 17 11:59:05.064682 dbus-daemon[1414]: [system] SELinux support is enabled Jan 17 11:59:05.065751 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 17 11:59:05.091164 extend-filesystems[1416]: Resized partition /dev/vda9 Jan 17 11:59:05.096654 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1373) Jan 17 11:59:05.096680 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jan 17 11:59:05.066506 systemd[1]: Starting update-engine.service - Update Engine... Jan 17 11:59:05.096885 extend-filesystems[1437]: resize2fs 1.47.1 (20-May-2024) Jan 17 11:59:05.105687 update_engine[1427]: I20250117 11:59:05.104096 1427 main.cc:92] Flatcar Update Engine starting Jan 17 11:59:05.071670 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 17 11:59:05.107305 jq[1431]: true Jan 17 11:59:05.073506 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 17 11:59:05.108432 update_engine[1427]: I20250117 11:59:05.107996 1427 update_check_scheduler.cc:74] Next update check in 9m15s Jan 17 11:59:05.091062 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 17 11:59:05.103405 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 17 11:59:05.103596 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 17 11:59:05.103960 systemd[1]: motdgen.service: Deactivated successfully. Jan 17 11:59:05.104108 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 17 11:59:05.108764 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 17 11:59:05.108956 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 17 11:59:05.124314 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jan 17 11:59:05.123932 (ntainerd)[1442]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 17 11:59:05.158597 jq[1441]: true Jan 17 11:59:05.140858 systemd[1]: Started update-engine.service - Update Engine. 
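extend-filesystems grows the root ext4 filesystem on /dev/vda9 online (resize2fs reports on-line resizing, and the kernel confirms the jump from 553472 to 1864699 blocks; the final message gives the block size as 4k). A two-line check of what that means in bytes, roughly 2.1 GiB growing to about 7.1 GiB:

    BLOCK = 4096  # "(4k)" blocks per the resize messages above
    old_blocks, new_blocks = 553472, 1864699

    for label, blocks in (("before", old_blocks), ("after", new_blocks)):
        size = blocks * BLOCK
        print(f"{label}: {blocks} blocks = {size} bytes = {size / 2**30:.2f} GiB")
    # before: 553472 blocks  -> ~2.11 GiB
    # after:  1864699 blocks -> ~7.11 GiB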
Jan 17 11:59:05.158898 extend-filesystems[1437]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 17 11:59:05.158898 extend-filesystems[1437]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 17 11:59:05.158898 extend-filesystems[1437]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jan 17 11:59:05.142898 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 17 11:59:05.166059 extend-filesystems[1416]: Resized filesystem in /dev/vda9 Jan 17 11:59:05.166944 tar[1440]: linux-arm64/helm Jan 17 11:59:05.142924 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 17 11:59:05.144344 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 17 11:59:05.144366 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 17 11:59:05.156803 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 17 11:59:05.158244 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 17 11:59:05.159934 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 17 11:59:05.168365 systemd-logind[1424]: Watching system buttons on /dev/input/event0 (Power Button) Jan 17 11:59:05.172517 systemd-logind[1424]: New seat seat0. Jan 17 11:59:05.176024 systemd[1]: Started systemd-logind.service - User Login Management. Jan 17 11:59:05.197678 bash[1469]: Updated "/home/core/.ssh/authorized_keys" Jan 17 11:59:05.195061 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 17 11:59:05.197453 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 17 11:59:05.216120 locksmithd[1461]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 17 11:59:05.343893 containerd[1442]: time="2025-01-17T11:59:05.343801456Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 17 11:59:05.368912 containerd[1442]: time="2025-01-17T11:59:05.368793593Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 17 11:59:05.370391 containerd[1442]: time="2025-01-17T11:59:05.370342540Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 17 11:59:05.370391 containerd[1442]: time="2025-01-17T11:59:05.370380734Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 17 11:59:05.370391 containerd[1442]: time="2025-01-17T11:59:05.370397444Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 17 11:59:05.370595 containerd[1442]: time="2025-01-17T11:59:05.370574761Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 17 11:59:05.370646 containerd[1442]: time="2025-01-17T11:59:05.370600558Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." 
type=io.containerd.snapshotter.v1 Jan 17 11:59:05.370888 containerd[1442]: time="2025-01-17T11:59:05.370851709Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 11:59:05.370888 containerd[1442]: time="2025-01-17T11:59:05.370886175Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 17 11:59:05.371855 containerd[1442]: time="2025-01-17T11:59:05.371170954Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 11:59:05.371855 containerd[1442]: time="2025-01-17T11:59:05.371233983Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 17 11:59:05.371855 containerd[1442]: time="2025-01-17T11:59:05.371260325Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 11:59:05.371855 containerd[1442]: time="2025-01-17T11:59:05.371273810Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 17 11:59:05.371855 containerd[1442]: time="2025-01-17T11:59:05.371396349Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 17 11:59:05.371855 containerd[1442]: time="2025-01-17T11:59:05.371628444Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 17 11:59:05.371855 containerd[1442]: time="2025-01-17T11:59:05.371761494Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 11:59:05.371855 containerd[1442]: time="2025-01-17T11:59:05.371779628Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 17 11:59:05.372155 containerd[1442]: time="2025-01-17T11:59:05.372024580Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 17 11:59:05.372155 containerd[1442]: time="2025-01-17T11:59:05.372120902Z" level=info msg="metadata content store policy set" policy=shared Jan 17 11:59:05.379166 containerd[1442]: time="2025-01-17T11:59:05.379132668Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 17 11:59:05.379375 containerd[1442]: time="2025-01-17T11:59:05.379352576Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 17 11:59:05.379448 containerd[1442]: time="2025-01-17T11:59:05.379433864Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 17 11:59:05.379518 containerd[1442]: time="2025-01-17T11:59:05.379504305Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 17 11:59:05.379634 containerd[1442]: time="2025-01-17T11:59:05.379618300Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." 
type=io.containerd.runtime.v1 Jan 17 11:59:05.379930 containerd[1442]: time="2025-01-17T11:59:05.379897175Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 17 11:59:05.380452 containerd[1442]: time="2025-01-17T11:59:05.380421252Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 17 11:59:05.380591 containerd[1442]: time="2025-01-17T11:59:05.380571473Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 17 11:59:05.380623 containerd[1442]: time="2025-01-17T11:59:05.380595931Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 17 11:59:05.380623 containerd[1442]: time="2025-01-17T11:59:05.380611803Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 17 11:59:05.380659 containerd[1442]: time="2025-01-17T11:59:05.380626963Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 17 11:59:05.380659 containerd[1442]: time="2025-01-17T11:59:05.380640658Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 17 11:59:05.380659 containerd[1442]: time="2025-01-17T11:59:05.380653054Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 17 11:59:05.380659 containerd[1442]: time="2025-01-17T11:59:05.380668884Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 17 11:59:05.380659 containerd[1442]: time="2025-01-17T11:59:05.380684296Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 17 11:59:05.380659 containerd[1442]: time="2025-01-17T11:59:05.380698996Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 17 11:59:05.380906 containerd[1442]: time="2025-01-17T11:59:05.380711476Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 17 11:59:05.380906 containerd[1442]: time="2025-01-17T11:59:05.380723788Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 17 11:59:05.380906 containerd[1442]: time="2025-01-17T11:59:05.380761479Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 17 11:59:05.380906 containerd[1442]: time="2025-01-17T11:59:05.380777980Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 17 11:59:05.380906 containerd[1442]: time="2025-01-17T11:59:05.380790460Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 17 11:59:05.380906 containerd[1442]: time="2025-01-17T11:59:05.380802940Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 17 11:59:05.380906 containerd[1442]: time="2025-01-17T11:59:05.380816634Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 17 11:59:05.380906 containerd[1442]: time="2025-01-17T11:59:05.380829450Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." 
type=io.containerd.grpc.v1 Jan 17 11:59:05.380906 containerd[1442]: time="2025-01-17T11:59:05.380841050Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 17 11:59:05.380906 containerd[1442]: time="2025-01-17T11:59:05.380853404Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 17 11:59:05.380906 containerd[1442]: time="2025-01-17T11:59:05.380866178Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 17 11:59:05.380906 containerd[1442]: time="2025-01-17T11:59:05.380880919Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 17 11:59:05.380906 containerd[1442]: time="2025-01-17T11:59:05.380894614Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 17 11:59:05.380906 containerd[1442]: time="2025-01-17T11:59:05.380906717Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 17 11:59:05.381287 containerd[1442]: time="2025-01-17T11:59:05.380918652Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 17 11:59:05.381287 containerd[1442]: time="2025-01-17T11:59:05.380934315Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 17 11:59:05.381287 containerd[1442]: time="2025-01-17T11:59:05.380955004Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 17 11:59:05.381287 containerd[1442]: time="2025-01-17T11:59:05.380966814Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 17 11:59:05.381287 containerd[1442]: time="2025-01-17T11:59:05.380977828Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 17 11:59:05.381287 containerd[1442]: time="2025-01-17T11:59:05.381093331Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 17 11:59:05.381287 containerd[1442]: time="2025-01-17T11:59:05.381113977Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 17 11:59:05.381287 containerd[1442]: time="2025-01-17T11:59:05.381126039Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 17 11:59:05.381287 containerd[1442]: time="2025-01-17T11:59:05.381137346Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 17 11:59:05.381287 containerd[1442]: time="2025-01-17T11:59:05.381148025Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 17 11:59:05.381287 containerd[1442]: time="2025-01-17T11:59:05.381165154Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 17 11:59:05.381287 containerd[1442]: time="2025-01-17T11:59:05.381175079Z" level=info msg="NRI interface is disabled by configuration." Jan 17 11:59:05.381287 containerd[1442]: time="2025-01-17T11:59:05.381186764Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 17 11:59:05.381612 containerd[1442]: time="2025-01-17T11:59:05.381548266Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 17 11:59:05.381612 containerd[1442]: time="2025-01-17T11:59:05.381613891Z" level=info msg="Connect containerd service" Jan 17 11:59:05.381889 containerd[1442]: time="2025-01-17T11:59:05.381640735Z" level=info msg="using legacy CRI server" Jan 17 11:59:05.381889 containerd[1442]: time="2025-01-17T11:59:05.381648106Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 17 11:59:05.381889 containerd[1442]: time="2025-01-17T11:59:05.381749538Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 17 11:59:05.382447 containerd[1442]: time="2025-01-17T11:59:05.382414413Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 17 11:59:05.382909 
containerd[1442]: time="2025-01-17T11:59:05.382887105Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 17 11:59:05.382996 containerd[1442]: time="2025-01-17T11:59:05.382949337Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 17 11:59:05.383086 containerd[1442]: time="2025-01-17T11:59:05.383062663Z" level=info msg="Start subscribing containerd event" Jan 17 11:59:05.383115 containerd[1442]: time="2025-01-17T11:59:05.383096752Z" level=info msg="Start recovering state" Jan 17 11:59:05.383198 containerd[1442]: time="2025-01-17T11:59:05.383153289Z" level=info msg="Start event monitor" Jan 17 11:59:05.383198 containerd[1442]: time="2025-01-17T11:59:05.383169078Z" level=info msg="Start snapshots syncer" Jan 17 11:59:05.383198 containerd[1442]: time="2025-01-17T11:59:05.383178040Z" level=info msg="Start cni network conf syncer for default" Jan 17 11:59:05.383198 containerd[1442]: time="2025-01-17T11:59:05.383187212Z" level=info msg="Start streaming server" Jan 17 11:59:05.383361 containerd[1442]: time="2025-01-17T11:59:05.383337098Z" level=info msg="containerd successfully booted in 0.041421s" Jan 17 11:59:05.385363 systemd[1]: Started containerd.service - containerd container runtime. Jan 17 11:59:05.510467 tar[1440]: linux-arm64/LICENSE Jan 17 11:59:05.510467 tar[1440]: linux-arm64/README.md Jan 17 11:59:05.523032 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 17 11:59:05.777433 sshd_keygen[1434]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 17 11:59:05.799281 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 17 11:59:05.813390 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 17 11:59:05.818981 systemd[1]: issuegen.service: Deactivated successfully. Jan 17 11:59:05.819169 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 17 11:59:05.821616 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 17 11:59:05.834300 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 17 11:59:05.836645 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 17 11:59:05.838441 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jan 17 11:59:05.839478 systemd[1]: Reached target getty.target - Login Prompts. Jan 17 11:59:06.159505 systemd-networkd[1382]: eth0: Gained IPv6LL Jan 17 11:59:06.163336 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 17 11:59:06.164750 systemd[1]: Reached target network-online.target - Network is Online. Jan 17 11:59:06.174042 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 17 11:59:06.176220 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 11:59:06.179571 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 17 11:59:06.195832 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 17 11:59:06.196028 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 17 11:59:06.197501 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 17 11:59:06.203259 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 17 11:59:06.777337 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 11:59:06.778644 systemd[1]: Reached target multi-user.target - Multi-User System. 
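The "failed to load cni during init" error above is expected on first boot: the CRI plugin's conf syncer found nothing under /etc/cni/net.d (the NetworkPluginConfDir from the config dump). Below is a minimal Python sketch, not part of the log, of roughly what that check amounts to; the extension filter matches what libcni loads by default.

#!/usr/bin/env python3
# Sketch of the CNI config check behind "no network config found in
# /etc/cni/net.d"; standard library only.
from pathlib import Path

CONF_DIR = Path("/etc/cni/net.d")  # NetworkPluginConfDir in the config dump above

configs = []
if CONF_DIR.is_dir():
    configs = sorted(
        p for p in CONF_DIR.iterdir()
        if p.suffix in {".conf", ".conflist", ".json"}  # extensions libcni accepts
    )

if configs:
    for conf in configs:
        print("found CNI network config:", conf)
else:
    print(f"no network config found in {CONF_DIR}: cni plugin not initialized")

Until a CNI add-on installs a config there, the node's pod network stays not-ready, but the host-networked control-plane static pods seen later in the log can still start.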
Jan 17 11:59:06.783016 (kubelet)[1528]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 11:59:06.784307 systemd[1]: Startup finished in 539ms (kernel) + 4.727s (initrd) + 3.390s (userspace) = 8.657s. Jan 17 11:59:07.284162 kubelet[1528]: E0117 11:59:07.283661 1528 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 11:59:07.286886 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 11:59:07.287048 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 11:59:11.350805 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 17 11:59:11.351970 systemd[1]: Started sshd@0-10.0.0.32:22-10.0.0.1:45184.service - OpenSSH per-connection server daemon (10.0.0.1:45184). Jan 17 11:59:11.484487 sshd[1542]: Accepted publickey for core from 10.0.0.1 port 45184 ssh2: RSA SHA256:xsFjL0Ru499iNfhRyIcjP6wTIWZ5oE8f5Pm6hYv+KHo Jan 17 11:59:11.486939 sshd[1542]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 11:59:11.507414 systemd-logind[1424]: New session 1 of user core. Jan 17 11:59:11.508395 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 17 11:59:11.515409 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 17 11:59:11.524178 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 17 11:59:11.526258 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 17 11:59:11.531942 (systemd)[1546]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 17 11:59:11.607171 systemd[1546]: Queued start job for default target default.target. Jan 17 11:59:11.618037 systemd[1546]: Created slice app.slice - User Application Slice. Jan 17 11:59:11.618079 systemd[1546]: Reached target paths.target - Paths. Jan 17 11:59:11.618091 systemd[1546]: Reached target timers.target - Timers. Jan 17 11:59:11.619197 systemd[1546]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 17 11:59:11.629752 systemd[1546]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 17 11:59:11.629850 systemd[1546]: Reached target sockets.target - Sockets. Jan 17 11:59:11.629868 systemd[1546]: Reached target basic.target - Basic System. Jan 17 11:59:11.629900 systemd[1546]: Reached target default.target - Main User Target. Jan 17 11:59:11.629924 systemd[1546]: Startup finished in 93ms. Jan 17 11:59:11.630088 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 17 11:59:11.631388 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 17 11:59:11.700441 systemd[1]: Started sshd@1-10.0.0.32:22-10.0.0.1:45200.service - OpenSSH per-connection server daemon (10.0.0.1:45200). Jan 17 11:59:11.734976 sshd[1557]: Accepted publickey for core from 10.0.0.1 port 45200 ssh2: RSA SHA256:xsFjL0Ru499iNfhRyIcjP6wTIWZ5oE8f5Pm6hYv+KHo Jan 17 11:59:11.736137 sshd[1557]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 11:59:11.740230 systemd-logind[1424]: New session 2 of user core. Jan 17 11:59:11.753399 systemd[1]: Started session-2.scope - Session 2 of User core. 
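kubelet.service exits with status 1 because /var/lib/kubelet/config.yaml does not exist yet; on a kubeadm-provisioned node that file is typically only written by kubeadm init/join, so the unit keeps failing and restarting until then. A small Python sketch (not the real kubelet code) of the check that fails:

#!/usr/bin/env python3
# Sketch of why run.go exits until something writes the kubelet config file.
from pathlib import Path

CONFIG = Path("/var/lib/kubelet/config.yaml")

try:
    text = CONFIG.read_text()
except FileNotFoundError as exc:
    raise SystemExit(
        f"failed to load kubelet config file, path: {CONFIG}, error: {exc}"
    )
print(f"loaded {len(text)} bytes of KubeletConfiguration")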
Jan 17 11:59:11.804937 sshd[1557]: pam_unix(sshd:session): session closed for user core Jan 17 11:59:11.814387 systemd[1]: sshd@1-10.0.0.32:22-10.0.0.1:45200.service: Deactivated successfully. Jan 17 11:59:11.815754 systemd[1]: session-2.scope: Deactivated successfully. Jan 17 11:59:11.816941 systemd-logind[1424]: Session 2 logged out. Waiting for processes to exit. Jan 17 11:59:11.817993 systemd[1]: Started sshd@2-10.0.0.32:22-10.0.0.1:45208.service - OpenSSH per-connection server daemon (10.0.0.1:45208). Jan 17 11:59:11.818805 systemd-logind[1424]: Removed session 2. Jan 17 11:59:11.880638 sshd[1564]: Accepted publickey for core from 10.0.0.1 port 45208 ssh2: RSA SHA256:xsFjL0Ru499iNfhRyIcjP6wTIWZ5oE8f5Pm6hYv+KHo Jan 17 11:59:11.882090 sshd[1564]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 11:59:11.885732 systemd-logind[1424]: New session 3 of user core. Jan 17 11:59:11.892394 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 17 11:59:11.941317 sshd[1564]: pam_unix(sshd:session): session closed for user core Jan 17 11:59:11.953040 systemd[1]: sshd@2-10.0.0.32:22-10.0.0.1:45208.service: Deactivated successfully. Jan 17 11:59:11.954497 systemd[1]: session-3.scope: Deactivated successfully. Jan 17 11:59:11.956427 systemd-logind[1424]: Session 3 logged out. Waiting for processes to exit. Jan 17 11:59:11.956831 systemd[1]: Started sshd@3-10.0.0.32:22-10.0.0.1:45220.service - OpenSSH per-connection server daemon (10.0.0.1:45220). Jan 17 11:59:11.958993 systemd-logind[1424]: Removed session 3. Jan 17 11:59:11.991975 sshd[1571]: Accepted publickey for core from 10.0.0.1 port 45220 ssh2: RSA SHA256:xsFjL0Ru499iNfhRyIcjP6wTIWZ5oE8f5Pm6hYv+KHo Jan 17 11:59:11.993364 sshd[1571]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 11:59:11.997290 systemd-logind[1424]: New session 4 of user core. Jan 17 11:59:12.005380 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 17 11:59:12.058475 sshd[1571]: pam_unix(sshd:session): session closed for user core Jan 17 11:59:12.071520 systemd[1]: sshd@3-10.0.0.32:22-10.0.0.1:45220.service: Deactivated successfully. Jan 17 11:59:12.072991 systemd[1]: session-4.scope: Deactivated successfully. Jan 17 11:59:12.074253 systemd-logind[1424]: Session 4 logged out. Waiting for processes to exit. Jan 17 11:59:12.081457 systemd[1]: Started sshd@4-10.0.0.32:22-10.0.0.1:45222.service - OpenSSH per-connection server daemon (10.0.0.1:45222). Jan 17 11:59:12.082249 systemd-logind[1424]: Removed session 4. Jan 17 11:59:12.113090 sshd[1578]: Accepted publickey for core from 10.0.0.1 port 45222 ssh2: RSA SHA256:xsFjL0Ru499iNfhRyIcjP6wTIWZ5oE8f5Pm6hYv+KHo Jan 17 11:59:12.114390 sshd[1578]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 11:59:12.117968 systemd-logind[1424]: New session 5 of user core. Jan 17 11:59:12.128378 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 17 11:59:12.187131 sudo[1581]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 17 11:59:12.187502 sudo[1581]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 11:59:12.202026 sudo[1581]: pam_unix(sudo:session): session closed for user root Jan 17 11:59:12.204242 sshd[1578]: pam_unix(sshd:session): session closed for user core Jan 17 11:59:12.211793 systemd[1]: sshd@4-10.0.0.32:22-10.0.0.1:45222.service: Deactivated successfully. 
Jan 17 11:59:12.213426 systemd[1]: session-5.scope: Deactivated successfully. Jan 17 11:59:12.214729 systemd-logind[1424]: Session 5 logged out. Waiting for processes to exit. Jan 17 11:59:12.216135 systemd[1]: Started sshd@5-10.0.0.32:22-10.0.0.1:45238.service - OpenSSH per-connection server daemon (10.0.0.1:45238). Jan 17 11:59:12.216907 systemd-logind[1424]: Removed session 5. Jan 17 11:59:12.252654 sshd[1586]: Accepted publickey for core from 10.0.0.1 port 45238 ssh2: RSA SHA256:xsFjL0Ru499iNfhRyIcjP6wTIWZ5oE8f5Pm6hYv+KHo Jan 17 11:59:12.253886 sshd[1586]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 11:59:12.257870 systemd-logind[1424]: New session 6 of user core. Jan 17 11:59:12.273358 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 17 11:59:12.325179 sudo[1590]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 17 11:59:12.325483 sudo[1590]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 11:59:12.328411 sudo[1590]: pam_unix(sudo:session): session closed for user root Jan 17 11:59:12.333157 sudo[1589]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 17 11:59:12.333472 sudo[1589]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 11:59:12.351468 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 17 11:59:12.352594 auditctl[1593]: No rules Jan 17 11:59:12.352870 systemd[1]: audit-rules.service: Deactivated successfully. Jan 17 11:59:12.354286 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 17 11:59:12.356382 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 17 11:59:12.378468 augenrules[1611]: No rules Jan 17 11:59:12.380304 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 17 11:59:12.381393 sudo[1589]: pam_unix(sudo:session): session closed for user root Jan 17 11:59:12.382926 sshd[1586]: pam_unix(sshd:session): session closed for user core Jan 17 11:59:12.393405 systemd[1]: sshd@5-10.0.0.32:22-10.0.0.1:45238.service: Deactivated successfully. Jan 17 11:59:12.394749 systemd[1]: session-6.scope: Deactivated successfully. Jan 17 11:59:12.397710 systemd-logind[1424]: Session 6 logged out. Waiting for processes to exit. Jan 17 11:59:12.405506 systemd[1]: Started sshd@6-10.0.0.32:22-10.0.0.1:53412.service - OpenSSH per-connection server daemon (10.0.0.1:53412). Jan 17 11:59:12.406852 systemd-logind[1424]: Removed session 6. Jan 17 11:59:12.437095 sshd[1619]: Accepted publickey for core from 10.0.0.1 port 53412 ssh2: RSA SHA256:xsFjL0Ru499iNfhRyIcjP6wTIWZ5oE8f5Pm6hYv+KHo Jan 17 11:59:12.438284 sshd[1619]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 11:59:12.441708 systemd-logind[1424]: New session 7 of user core. Jan 17 11:59:12.447342 systemd[1]: Started session-7.scope - Session 7 of User core. 
Jan 17 11:59:12.498753 sudo[1622]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 17 11:59:12.499040 sudo[1622]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 11:59:12.811582 (dockerd)[1641]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 17 11:59:12.811636 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 17 11:59:13.094606 dockerd[1641]: time="2025-01-17T11:59:13.094476709Z" level=info msg="Starting up" Jan 17 11:59:13.216138 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport2300855176-merged.mount: Deactivated successfully. Jan 17 11:59:13.235212 dockerd[1641]: time="2025-01-17T11:59:13.235141010Z" level=info msg="Loading containers: start." Jan 17 11:59:13.330236 kernel: Initializing XFRM netlink socket Jan 17 11:59:13.391184 systemd-networkd[1382]: docker0: Link UP Jan 17 11:59:13.409465 dockerd[1641]: time="2025-01-17T11:59:13.409370482Z" level=info msg="Loading containers: done." Jan 17 11:59:13.420629 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3975873765-merged.mount: Deactivated successfully. Jan 17 11:59:13.421944 dockerd[1641]: time="2025-01-17T11:59:13.421893272Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 17 11:59:13.422011 dockerd[1641]: time="2025-01-17T11:59:13.421996309Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 17 11:59:13.422139 dockerd[1641]: time="2025-01-17T11:59:13.422109914Z" level=info msg="Daemon has completed initialization" Jan 17 11:59:13.450741 dockerd[1641]: time="2025-01-17T11:59:13.450618065Z" level=info msg="API listen on /run/docker.sock" Jan 17 11:59:13.451070 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 17 11:59:14.157770 containerd[1442]: time="2025-01-17T11:59:14.157728826Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.5\"" Jan 17 11:59:14.823767 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3430145551.mount: Deactivated successfully. 
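With the daemon reporting "API listen on /run/docker.sock", its Engine API is reachable over that unix socket. The sketch below queries GET /version (a real Engine API endpoint) with only the standard library; run it as root or a docker-group member, and treat the framing details as illustrative.

#!/usr/bin/env python3
# Minimal HTTP-over-unix-socket query against the Docker Engine API.
import json
import socket

SOCK = "/run/docker.sock"

s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
s.connect(SOCK)
s.sendall(b"GET /version HTTP/1.0\r\nHost: docker\r\n\r\n")

raw = b""
while chunk := s.recv(4096):  # HTTP/1.0: the server closes when it is done
    raw += chunk
s.close()

_headers, _, body = raw.partition(b"\r\n\r\n")
info = json.loads(body)
print(info["Version"], info["ApiVersion"])  # e.g. 26.1.0, matching the log above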
Jan 17 11:59:15.703174 containerd[1442]: time="2025-01-17T11:59:15.703116195Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 11:59:15.703626 containerd[1442]: time="2025-01-17T11:59:15.703572486Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.5: active requests=0, bytes read=25618072" Jan 17 11:59:15.704571 containerd[1442]: time="2025-01-17T11:59:15.704541274Z" level=info msg="ImageCreate event name:\"sha256:c33b6b5a9aa5348a4f3ab96e0977e49acb8ca86c4ec3973023e12c0083423692\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 11:59:15.707965 containerd[1442]: time="2025-01-17T11:59:15.707894939Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:fc4b366c0036b90d147f3b58244cf7d5f1f42b0db539f0fe83a8fc6e25a434ab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 11:59:15.709132 containerd[1442]: time="2025-01-17T11:59:15.709036030Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.5\" with image id \"sha256:c33b6b5a9aa5348a4f3ab96e0977e49acb8ca86c4ec3973023e12c0083423692\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:fc4b366c0036b90d147f3b58244cf7d5f1f42b0db539f0fe83a8fc6e25a434ab\", size \"25614870\" in 1.551263738s" Jan 17 11:59:15.709132 containerd[1442]: time="2025-01-17T11:59:15.709076970Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.5\" returns image reference \"sha256:c33b6b5a9aa5348a4f3ab96e0977e49acb8ca86c4ec3973023e12c0083423692\"" Jan 17 11:59:15.709739 containerd[1442]: time="2025-01-17T11:59:15.709716458Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.5\"" Jan 17 11:59:16.974144 containerd[1442]: time="2025-01-17T11:59:16.974089436Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 11:59:16.974883 containerd[1442]: time="2025-01-17T11:59:16.974835660Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.5: active requests=0, bytes read=22469469" Jan 17 11:59:16.977959 containerd[1442]: time="2025-01-17T11:59:16.977894022Z" level=info msg="ImageCreate event name:\"sha256:678a3aee724f5d7904c30cda32c06f842784d67e7bd0cece4225fa7c1dcd0c73\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 11:59:16.980918 containerd[1442]: time="2025-01-17T11:59:16.980889027Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:848cf42bf6c3c5ccac232b76c901c309edb3ebeac4d856885af0fc718798207e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 11:59:16.982085 containerd[1442]: time="2025-01-17T11:59:16.982055062Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.5\" with image id \"sha256:678a3aee724f5d7904c30cda32c06f842784d67e7bd0cece4225fa7c1dcd0c73\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:848cf42bf6c3c5ccac232b76c901c309edb3ebeac4d856885af0fc718798207e\", size \"23873257\" in 1.272306426s" Jan 17 11:59:16.982085 containerd[1442]: time="2025-01-17T11:59:16.982086802Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.5\" returns image reference \"sha256:678a3aee724f5d7904c30cda32c06f842784d67e7bd0cece4225fa7c1dcd0c73\"" Jan 17 11:59:16.982507 
containerd[1442]: time="2025-01-17T11:59:16.982485952Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.5\"" Jan 17 11:59:17.399799 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 17 11:59:17.409424 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 11:59:17.508889 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 11:59:17.513510 (kubelet)[1856]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 11:59:17.547912 kubelet[1856]: E0117 11:59:17.547836 1856 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 11:59:17.551121 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 11:59:17.551298 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 11:59:18.178851 containerd[1442]: time="2025-01-17T11:59:18.178793896Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 11:59:18.179345 containerd[1442]: time="2025-01-17T11:59:18.179306707Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.5: active requests=0, bytes read=17024219" Jan 17 11:59:18.180536 containerd[1442]: time="2025-01-17T11:59:18.180498494Z" level=info msg="ImageCreate event name:\"sha256:066a1dc527aec5b7c19bcf4b81f92b15816afc78e9713266d355333b7eb81050\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 11:59:18.183221 containerd[1442]: time="2025-01-17T11:59:18.183163419Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:0e01fd956ba32a7fa08f6b6da24e8c49015905c8e2cf752978d495e44cd4a8a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 11:59:18.184565 containerd[1442]: time="2025-01-17T11:59:18.184535688Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.5\" with image id \"sha256:066a1dc527aec5b7c19bcf4b81f92b15816afc78e9713266d355333b7eb81050\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:0e01fd956ba32a7fa08f6b6da24e8c49015905c8e2cf752978d495e44cd4a8a9\", size \"18428025\" in 1.202018264s" Jan 17 11:59:18.184618 containerd[1442]: time="2025-01-17T11:59:18.184571825Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.5\" returns image reference \"sha256:066a1dc527aec5b7c19bcf4b81f92b15816afc78e9713266d355333b7eb81050\"" Jan 17 11:59:18.185283 containerd[1442]: time="2025-01-17T11:59:18.185219020Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.5\"" Jan 17 11:59:19.184061 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount980928862.mount: Deactivated successfully. 
Jan 17 11:59:19.412015 containerd[1442]: time="2025-01-17T11:59:19.411957118Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 11:59:19.412772 containerd[1442]: time="2025-01-17T11:59:19.412742608Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.5: active requests=0, bytes read=26772119" Jan 17 11:59:19.413380 containerd[1442]: time="2025-01-17T11:59:19.413356060Z" level=info msg="ImageCreate event name:\"sha256:571bb7ded0ff97311ed313f069becb58480cd66da04175981cfee2f3affe3e95\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 11:59:19.415651 containerd[1442]: time="2025-01-17T11:59:19.415619340Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:c00685cc45c1fb539c5bbd8d24d2577f96e9399efac1670f688f654b30f8c64c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 11:59:19.416365 containerd[1442]: time="2025-01-17T11:59:19.416221028Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.5\" with image id \"sha256:571bb7ded0ff97311ed313f069becb58480cd66da04175981cfee2f3affe3e95\", repo tag \"registry.k8s.io/kube-proxy:v1.31.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:c00685cc45c1fb539c5bbd8d24d2577f96e9399efac1670f688f654b30f8c64c\", size \"26771136\" in 1.230929503s" Jan 17 11:59:19.416365 containerd[1442]: time="2025-01-17T11:59:19.416257893Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.5\" returns image reference \"sha256:571bb7ded0ff97311ed313f069becb58480cd66da04175981cfee2f3affe3e95\"" Jan 17 11:59:19.416796 containerd[1442]: time="2025-01-17T11:59:19.416773764Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 17 11:59:20.049998 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount820708217.mount: Deactivated successfully. 
Jan 17 11:59:20.606440 containerd[1442]: time="2025-01-17T11:59:20.606373336Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 11:59:20.606946 containerd[1442]: time="2025-01-17T11:59:20.606907702Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485383" Jan 17 11:59:20.607677 containerd[1442]: time="2025-01-17T11:59:20.607641968Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 11:59:20.610928 containerd[1442]: time="2025-01-17T11:59:20.610892605Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 11:59:20.612339 containerd[1442]: time="2025-01-17T11:59:20.612266943Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.195459064s" Jan 17 11:59:20.612339 containerd[1442]: time="2025-01-17T11:59:20.612310578Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Jan 17 11:59:20.612903 containerd[1442]: time="2025-01-17T11:59:20.612742499Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 17 11:59:21.055641 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount676144584.mount: Deactivated successfully. 
Jan 17 11:59:21.060161 containerd[1442]: time="2025-01-17T11:59:21.060118097Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 11:59:21.061079 containerd[1442]: time="2025-01-17T11:59:21.061029645Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" Jan 17 11:59:21.061924 containerd[1442]: time="2025-01-17T11:59:21.061877643Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 11:59:21.064528 containerd[1442]: time="2025-01-17T11:59:21.064479919Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 11:59:21.065576 containerd[1442]: time="2025-01-17T11:59:21.065388370Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 452.602977ms" Jan 17 11:59:21.065576 containerd[1442]: time="2025-01-17T11:59:21.065419783Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Jan 17 11:59:21.066022 containerd[1442]: time="2025-01-17T11:59:21.065933578Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Jan 17 11:59:21.701206 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2792186553.mount: Deactivated successfully. Jan 17 11:59:23.285933 containerd[1442]: time="2025-01-17T11:59:23.285886017Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 11:59:23.286903 containerd[1442]: time="2025-01-17T11:59:23.286655310Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66406427" Jan 17 11:59:23.287753 containerd[1442]: time="2025-01-17T11:59:23.287695630Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 11:59:23.290798 containerd[1442]: time="2025-01-17T11:59:23.290748621Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 11:59:23.292090 containerd[1442]: time="2025-01-17T11:59:23.292057878Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 2.226091088s" Jan 17 11:59:23.292300 containerd[1442]: time="2025-01-17T11:59:23.292173487Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\"" Jan 17 11:59:27.649968 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
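The "Pulled image ... in ..." entries above give enough to estimate pull throughput. A quick derivation (sizes in bytes and durations copied from the log; the MiB/s figures are computed here, not logged):

#!/usr/bin/env python3
# Rough pull throughput for the images fetched above.
pulls = [
    ("kube-apiserver:v1.31.5",          25_614_870, 1.551263738),
    ("kube-controller-manager:v1.31.5", 23_873_257, 1.272306426),
    ("kube-scheduler:v1.31.5",          18_428_025, 1.202018264),
    ("kube-proxy:v1.31.5",              26_771_136, 1.230929503),
    ("coredns:v1.11.1",                 16_482_581, 1.195459064),
    ("pause:3.10",                          267_933, 0.452602977),
    ("etcd:3.5.15-0",                   66_535_646, 2.226091088),
]

for image, size, seconds in pulls:
    print(f"{image:<34} {size / seconds / 2**20:6.1f} MiB/s")

The tiny pause image is the outlier: its wall-clock time is dominated by registry round-trips rather than transfer.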
Jan 17 11:59:27.659370 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 11:59:27.747061 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 11:59:27.751262 (kubelet)[2009]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 11:59:27.844744 kubelet[2009]: E0117 11:59:27.844688 2009 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 11:59:27.847278 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 11:59:27.847422 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 11:59:28.317873 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 11:59:28.327403 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 11:59:28.347772 systemd[1]: Reloading requested from client PID 2024 ('systemctl') (unit session-7.scope)... Jan 17 11:59:28.347789 systemd[1]: Reloading... Jan 17 11:59:28.409374 zram_generator::config[2063]: No configuration found. Jan 17 11:59:28.524556 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 11:59:28.576395 systemd[1]: Reloading finished in 228 ms. Jan 17 11:59:28.615891 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 11:59:28.618382 systemd[1]: kubelet.service: Deactivated successfully. Jan 17 11:59:28.618568 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 11:59:28.620031 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 11:59:28.711711 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 11:59:28.715358 (kubelet)[2110]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 17 11:59:28.748084 kubelet[2110]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 11:59:28.748084 kubelet[2110]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 17 11:59:28.748084 kubelet[2110]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 17 11:59:28.748419 kubelet[2110]: I0117 11:59:28.748268 2110 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 17 11:59:29.369004 kubelet[2110]: I0117 11:59:29.368959 2110 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Jan 17 11:59:29.369004 kubelet[2110]: I0117 11:59:29.368993 2110 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 17 11:59:29.369297 kubelet[2110]: I0117 11:59:29.369272 2110 server.go:929] "Client rotation is on, will bootstrap in background" Jan 17 11:59:29.400608 kubelet[2110]: E0117 11:59:29.400566 2110 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.32:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.32:6443: connect: connection refused" logger="UnhandledError" Jan 17 11:59:29.401399 kubelet[2110]: I0117 11:59:29.401292 2110 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 17 11:59:29.413158 kubelet[2110]: E0117 11:59:29.413119 2110 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 17 11:59:29.413158 kubelet[2110]: I0117 11:59:29.413156 2110 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 17 11:59:29.418260 kubelet[2110]: I0117 11:59:29.418239 2110 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 17 11:59:29.419274 kubelet[2110]: I0117 11:59:29.419181 2110 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 17 11:59:29.419375 kubelet[2110]: I0117 11:59:29.419337 2110 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 17 11:59:29.419533 kubelet[2110]: I0117 11:59:29.419375 2110 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 17 11:59:29.419680 kubelet[2110]: I0117 11:59:29.419669 2110 topology_manager.go:138] "Creating topology manager with none policy" Jan 17 11:59:29.419680 kubelet[2110]: I0117 11:59:29.419680 2110 container_manager_linux.go:300] "Creating device plugin manager" Jan 17 11:59:29.419865 kubelet[2110]: I0117 11:59:29.419852 2110 state_mem.go:36] "Initialized new in-memory state store" Jan 17 11:59:29.421581 kubelet[2110]: I0117 11:59:29.421548 2110 kubelet.go:408] "Attempting to sync node with API server" Jan 17 11:59:29.421581 kubelet[2110]: I0117 11:59:29.421580 2110 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 17 11:59:29.421641 kubelet[2110]: I0117 11:59:29.421607 2110 kubelet.go:314] "Adding apiserver pod source" Jan 17 11:59:29.421641 kubelet[2110]: I0117 11:59:29.421618 2110 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 17 11:59:29.422838 kubelet[2110]: W0117 11:59:29.422674 2110 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.32:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.32:6443: connect: connection refused Jan 17 11:59:29.422838 kubelet[2110]: E0117 11:59:29.422742 2110 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get 
\"https://10.0.0.32:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.32:6443: connect: connection refused" logger="UnhandledError" Jan 17 11:59:29.423004 kubelet[2110]: W0117 11:59:29.422944 2110 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.32:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.32:6443: connect: connection refused Jan 17 11:59:29.423036 kubelet[2110]: E0117 11:59:29.423001 2110 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.32:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.32:6443: connect: connection refused" logger="UnhandledError" Jan 17 11:59:29.425523 kubelet[2110]: I0117 11:59:29.425498 2110 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 17 11:59:29.427315 kubelet[2110]: I0117 11:59:29.427289 2110 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 17 11:59:29.427966 kubelet[2110]: W0117 11:59:29.427939 2110 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 17 11:59:29.431149 kubelet[2110]: I0117 11:59:29.430975 2110 server.go:1269] "Started kubelet" Jan 17 11:59:29.431800 kubelet[2110]: I0117 11:59:29.431492 2110 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 17 11:59:29.432089 kubelet[2110]: I0117 11:59:29.432043 2110 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 17 11:59:29.432439 kubelet[2110]: I0117 11:59:29.432418 2110 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 17 11:59:29.433016 kubelet[2110]: I0117 11:59:29.432988 2110 server.go:460] "Adding debug handlers to kubelet server" Jan 17 11:59:29.435393 kubelet[2110]: I0117 11:59:29.435360 2110 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 17 11:59:29.437460 kubelet[2110]: I0117 11:59:29.437441 2110 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 17 11:59:29.437531 kubelet[2110]: I0117 11:59:29.437511 2110 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 17 11:59:29.439933 kubelet[2110]: I0117 11:59:29.438635 2110 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 17 11:59:29.439933 kubelet[2110]: I0117 11:59:29.438687 2110 reconciler.go:26] "Reconciler: start to sync state" Jan 17 11:59:29.439933 kubelet[2110]: W0117 11:59:29.439161 2110 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.32:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.32:6443: connect: connection refused Jan 17 11:59:29.439933 kubelet[2110]: E0117 11:59:29.439252 2110 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.32:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.32:6443: connect: connection refused" logger="UnhandledError" Jan 17 
11:59:29.440354 kubelet[2110]: E0117 11:59:29.440274 2110 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.32:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.32:6443: connect: connection refused" interval="200ms" Jan 17 11:59:29.440529 kubelet[2110]: E0117 11:59:29.440371 2110 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 17 11:59:29.440529 kubelet[2110]: E0117 11:59:29.440430 2110 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 17 11:59:29.440589 kubelet[2110]: I0117 11:59:29.440530 2110 factory.go:221] Registration of the systemd container factory successfully Jan 17 11:59:29.440616 kubelet[2110]: I0117 11:59:29.440598 2110 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 17 11:59:29.441958 kubelet[2110]: E0117 11:59:29.439575 2110 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.32:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.32:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.181b790affdfba2a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-01-17 11:59:29.430944298 +0000 UTC m=+0.712884454,LastTimestamp:2025-01-17 11:59:29.430944298 +0000 UTC m=+0.712884454,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 17 11:59:29.442771 kubelet[2110]: I0117 11:59:29.442747 2110 factory.go:221] Registration of the containerd container factory successfully Jan 17 11:59:29.450744 kubelet[2110]: I0117 11:59:29.450689 2110 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 17 11:59:29.451912 kubelet[2110]: I0117 11:59:29.451891 2110 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 17 11:59:29.451912 kubelet[2110]: I0117 11:59:29.451921 2110 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 17 11:59:29.452017 kubelet[2110]: I0117 11:59:29.451939 2110 kubelet.go:2321] "Starting kubelet main sync loop" Jan 17 11:59:29.454101 kubelet[2110]: E0117 11:59:29.453764 2110 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 17 11:59:29.456394 kubelet[2110]: W0117 11:59:29.456346 2110 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.32:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.32:6443: connect: connection refused Jan 17 11:59:29.456468 kubelet[2110]: E0117 11:59:29.456401 2110 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.32:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.32:6443: connect: connection refused" logger="UnhandledError" Jan 17 11:59:29.456493 kubelet[2110]: I0117 11:59:29.456470 2110 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 17 11:59:29.456493 kubelet[2110]: I0117 11:59:29.456480 2110 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 17 11:59:29.456548 kubelet[2110]: I0117 11:59:29.456497 2110 state_mem.go:36] "Initialized new in-memory state store" Jan 17 11:59:29.520656 kubelet[2110]: I0117 11:59:29.520615 2110 policy_none.go:49] "None policy: Start" Jan 17 11:59:29.521357 kubelet[2110]: I0117 11:59:29.521340 2110 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 17 11:59:29.521431 kubelet[2110]: I0117 11:59:29.521400 2110 state_mem.go:35] "Initializing new in-memory state store" Jan 17 11:59:29.528066 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 17 11:59:29.541029 kubelet[2110]: E0117 11:59:29.540994 2110 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 17 11:59:29.542897 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 17 11:59:29.545829 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Jan 17 11:59:29.554563 kubelet[2110]: E0117 11:59:29.554533 2110 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 17 11:59:29.561213 kubelet[2110]: I0117 11:59:29.561184 2110 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 17 11:59:29.561431 kubelet[2110]: I0117 11:59:29.561414 2110 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 17 11:59:29.561476 kubelet[2110]: I0117 11:59:29.561432 2110 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 17 11:59:29.561892 kubelet[2110]: I0117 11:59:29.561772 2110 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 17 11:59:29.563151 kubelet[2110]: E0117 11:59:29.563125 2110 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 17 11:59:29.641381 kubelet[2110]: E0117 11:59:29.641259 2110 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.32:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.32:6443: connect: connection refused" interval="400ms" Jan 17 11:59:29.663566 kubelet[2110]: I0117 11:59:29.663535 2110 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jan 17 11:59:29.665677 kubelet[2110]: E0117 11:59:29.665649 2110 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.32:6443/api/v1/nodes\": dial tcp 10.0.0.32:6443: connect: connection refused" node="localhost" Jan 17 11:59:29.765883 systemd[1]: Created slice kubepods-burstable-podbfc17f5401e3e65df01ba38a8ad3f085.slice - libcontainer container kubepods-burstable-podbfc17f5401e3e65df01ba38a8ad3f085.slice. Jan 17 11:59:29.785018 systemd[1]: Created slice kubepods-burstable-podfa5289f3c0ba7f1736282e713231ffc5.slice - libcontainer container kubepods-burstable-podfa5289f3c0ba7f1736282e713231ffc5.slice. Jan 17 11:59:29.797921 systemd[1]: Created slice kubepods-burstable-podc988230cd0d49eebfaffbefbe8c74a10.slice - libcontainer container kubepods-burstable-podc988230cd0d49eebfaffbefbe8c74a10.slice. 
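Every "connection refused" entry above is the same symptom: client-go reflectors, the lease controller, and node registration all dial https://10.0.0.32:6443 before the kube-apiserver static pod (whose cgroup slice was just created) is serving. A sketch of that failing step, reduced to a plain TCP connect:

#!/usr/bin/env python3
# Probe the API server endpoint the kubelet keeps failing to reach.
import socket

HOST, PORT = "10.0.0.32", 6443

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.settimeout(2.0)
try:
    s.connect((HOST, PORT))
    print(f"{HOST}:{PORT} accepting connections")
except ConnectionRefusedError:
    print(f"{HOST}:{PORT} connect: connection refused (apiserver not serving yet)")
except OSError as exc:
    print(f"{HOST}:{PORT} unreachable: {exc}")
finally:
    s.close()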
Jan 17 11:59:29.840487 kubelet[2110]: I0117 11:59:29.840446 2110 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/bfc17f5401e3e65df01ba38a8ad3f085-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"bfc17f5401e3e65df01ba38a8ad3f085\") " pod="kube-system/kube-apiserver-localhost" Jan 17 11:59:29.840487 kubelet[2110]: I0117 11:59:29.840485 2110 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 11:59:29.840885 kubelet[2110]: I0117 11:59:29.840506 2110 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 11:59:29.840885 kubelet[2110]: I0117 11:59:29.840522 2110 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 11:59:29.840885 kubelet[2110]: I0117 11:59:29.840540 2110 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/bfc17f5401e3e65df01ba38a8ad3f085-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"bfc17f5401e3e65df01ba38a8ad3f085\") " pod="kube-system/kube-apiserver-localhost" Jan 17 11:59:29.840885 kubelet[2110]: I0117 11:59:29.840554 2110 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/bfc17f5401e3e65df01ba38a8ad3f085-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"bfc17f5401e3e65df01ba38a8ad3f085\") " pod="kube-system/kube-apiserver-localhost" Jan 17 11:59:29.840885 kubelet[2110]: I0117 11:59:29.840567 2110 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 11:59:29.840992 kubelet[2110]: I0117 11:59:29.840582 2110 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 11:59:29.840992 kubelet[2110]: I0117 11:59:29.840598 2110 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c988230cd0d49eebfaffbefbe8c74a10-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"c988230cd0d49eebfaffbefbe8c74a10\") " 
pod="kube-system/kube-scheduler-localhost" Jan 17 11:59:29.867670 kubelet[2110]: I0117 11:59:29.867628 2110 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jan 17 11:59:29.868009 kubelet[2110]: E0117 11:59:29.867960 2110 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.32:6443/api/v1/nodes\": dial tcp 10.0.0.32:6443: connect: connection refused" node="localhost" Jan 17 11:59:30.041836 kubelet[2110]: E0117 11:59:30.041793 2110 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.32:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.32:6443: connect: connection refused" interval="800ms" Jan 17 11:59:30.090648 kubelet[2110]: E0117 11:59:30.090559 2110 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 11:59:30.091248 containerd[1442]: time="2025-01-17T11:59:30.091210111Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:bfc17f5401e3e65df01ba38a8ad3f085,Namespace:kube-system,Attempt:0,}" Jan 17 11:59:30.096832 kubelet[2110]: E0117 11:59:30.096396 2110 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 11:59:30.097085 containerd[1442]: time="2025-01-17T11:59:30.097050395Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:fa5289f3c0ba7f1736282e713231ffc5,Namespace:kube-system,Attempt:0,}" Jan 17 11:59:30.100416 kubelet[2110]: E0117 11:59:30.100393 2110 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 11:59:30.101045 containerd[1442]: time="2025-01-17T11:59:30.100866549Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:c988230cd0d49eebfaffbefbe8c74a10,Namespace:kube-system,Attempt:0,}" Jan 17 11:59:30.268993 kubelet[2110]: I0117 11:59:30.268957 2110 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jan 17 11:59:30.269288 kubelet[2110]: E0117 11:59:30.269262 2110 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.32:6443/api/v1/nodes\": dial tcp 10.0.0.32:6443: connect: connection refused" node="localhost" Jan 17 11:59:30.409526 kubelet[2110]: W0117 11:59:30.409453 2110 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.32:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.32:6443: connect: connection refused Jan 17 11:59:30.409526 kubelet[2110]: E0117 11:59:30.409523 2110 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.32:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.32:6443: connect: connection refused" logger="UnhandledError" Jan 17 11:59:30.508173 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1239173594.mount: Deactivated successfully. 
Jan 17 11:59:30.513157 containerd[1442]: time="2025-01-17T11:59:30.513106404Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 11:59:30.514481 containerd[1442]: time="2025-01-17T11:59:30.514438502Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 17 11:59:30.515153 containerd[1442]: time="2025-01-17T11:59:30.515109258Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 11:59:30.517121 containerd[1442]: time="2025-01-17T11:59:30.516479660Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 11:59:30.520020 containerd[1442]: time="2025-01-17T11:59:30.519959614Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Jan 17 11:59:30.520593 containerd[1442]: time="2025-01-17T11:59:30.520422144Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 11:59:30.520593 containerd[1442]: time="2025-01-17T11:59:30.520533370Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 17 11:59:30.522779 containerd[1442]: time="2025-01-17T11:59:30.522747256Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 11:59:30.524570 containerd[1442]: time="2025-01-17T11:59:30.524541443Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 427.316958ms" Jan 17 11:59:30.525995 containerd[1442]: time="2025-01-17T11:59:30.525856553Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 434.564987ms" Jan 17 11:59:30.528096 containerd[1442]: time="2025-01-17T11:59:30.528064629Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 427.14379ms" Jan 17 11:59:30.662441 containerd[1442]: time="2025-01-17T11:59:30.661549079Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 11:59:30.662441 containerd[1442]: time="2025-01-17T11:59:30.661624805Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 11:59:30.662441 containerd[1442]: time="2025-01-17T11:59:30.661640151Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 11:59:30.662441 containerd[1442]: time="2025-01-17T11:59:30.661669680Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 11:59:30.662441 containerd[1442]: time="2025-01-17T11:59:30.661794127Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 11:59:30.662441 containerd[1442]: time="2025-01-17T11:59:30.661722969Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 11:59:30.662441 containerd[1442]: time="2025-01-17T11:59:30.661897539Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 11:59:30.662441 containerd[1442]: time="2025-01-17T11:59:30.661929112Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 11:59:30.662441 containerd[1442]: time="2025-01-17T11:59:30.661951389Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 11:59:30.662441 containerd[1442]: time="2025-01-17T11:59:30.661968498Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 11:59:30.662441 containerd[1442]: time="2025-01-17T11:59:30.662044744Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 11:59:30.662732 containerd[1442]: time="2025-01-17T11:59:30.662156050Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 11:59:30.680372 systemd[1]: Started cri-containerd-7c75cd10363a1185ea090163e10e0b8f16031096e926ea4d6bc01964e75853d9.scope - libcontainer container 7c75cd10363a1185ea090163e10e0b8f16031096e926ea4d6bc01964e75853d9. Jan 17 11:59:30.684264 systemd[1]: Started cri-containerd-b3d79b9833e593f1fbb97d37a3301e4463f80c72df8744ca34978a39670dac5c.scope - libcontainer container b3d79b9833e593f1fbb97d37a3301e4463f80c72df8744ca34978a39670dac5c. Jan 17 11:59:30.685487 systemd[1]: Started cri-containerd-f71c33bd644aec673123ee8747cdc4f182bf1f400fd231ff3ab8332cfc0d4714.scope - libcontainer container f71c33bd644aec673123ee8747cdc4f182bf1f400fd231ff3ab8332cfc0d4714. 
Jan 17 11:59:30.719036 containerd[1442]: time="2025-01-17T11:59:30.718992161Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:bfc17f5401e3e65df01ba38a8ad3f085,Namespace:kube-system,Attempt:0,} returns sandbox id \"b3d79b9833e593f1fbb97d37a3301e4463f80c72df8744ca34978a39670dac5c\"" Jan 17 11:59:30.719954 containerd[1442]: time="2025-01-17T11:59:30.719780915Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:fa5289f3c0ba7f1736282e713231ffc5,Namespace:kube-system,Attempt:0,} returns sandbox id \"7c75cd10363a1185ea090163e10e0b8f16031096e926ea4d6bc01964e75853d9\"" Jan 17 11:59:30.720350 kubelet[2110]: E0117 11:59:30.720328 2110 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 11:59:30.721368 kubelet[2110]: E0117 11:59:30.721347 2110 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 11:59:30.723844 containerd[1442]: time="2025-01-17T11:59:30.723806858Z" level=info msg="CreateContainer within sandbox \"b3d79b9833e593f1fbb97d37a3301e4463f80c72df8744ca34978a39670dac5c\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 17 11:59:30.724698 containerd[1442]: time="2025-01-17T11:59:30.724672579Z" level=info msg="CreateContainer within sandbox \"7c75cd10363a1185ea090163e10e0b8f16031096e926ea4d6bc01964e75853d9\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 17 11:59:30.731611 containerd[1442]: time="2025-01-17T11:59:30.731579639Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:c988230cd0d49eebfaffbefbe8c74a10,Namespace:kube-system,Attempt:0,} returns sandbox id \"f71c33bd644aec673123ee8747cdc4f182bf1f400fd231ff3ab8332cfc0d4714\"" Jan 17 11:59:30.732136 kubelet[2110]: E0117 11:59:30.732114 2110 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 11:59:30.733563 containerd[1442]: time="2025-01-17T11:59:30.733509452Z" level=info msg="CreateContainer within sandbox \"f71c33bd644aec673123ee8747cdc4f182bf1f400fd231ff3ab8332cfc0d4714\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 17 11:59:30.737511 containerd[1442]: time="2025-01-17T11:59:30.737474735Z" level=info msg="CreateContainer within sandbox \"b3d79b9833e593f1fbb97d37a3301e4463f80c72df8744ca34978a39670dac5c\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"710742fe8fc7d75eaa9348a1e5a673ebec3cc68f398d061c7a2deb38cebafd06\"" Jan 17 11:59:30.738086 containerd[1442]: time="2025-01-17T11:59:30.738062513Z" level=info msg="StartContainer for \"710742fe8fc7d75eaa9348a1e5a673ebec3cc68f398d061c7a2deb38cebafd06\"" Jan 17 11:59:30.742892 containerd[1442]: time="2025-01-17T11:59:30.742816949Z" level=info msg="CreateContainer within sandbox \"7c75cd10363a1185ea090163e10e0b8f16031096e926ea4d6bc01964e75853d9\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"1467689bf0920307db37807eadab9696a49b9ffa66ff590be1d3b95fc383058d\"" Jan 17 11:59:30.743314 containerd[1442]: time="2025-01-17T11:59:30.743289777Z" level=info msg="StartContainer for \"1467689bf0920307db37807eadab9696a49b9ffa66ff590be1d3b95fc383058d\"" Jan 17 
11:59:30.749516 containerd[1442]: time="2025-01-17T11:59:30.749477479Z" level=info msg="CreateContainer within sandbox \"f71c33bd644aec673123ee8747cdc4f182bf1f400fd231ff3ab8332cfc0d4714\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"f826634d64bb314b73d40804b05fa5882ee36ce03b68b50f3f8726a122245ca4\"" Jan 17 11:59:30.750366 containerd[1442]: time="2025-01-17T11:59:30.750339514Z" level=info msg="StartContainer for \"f826634d64bb314b73d40804b05fa5882ee36ce03b68b50f3f8726a122245ca4\"" Jan 17 11:59:30.764676 systemd[1]: Started cri-containerd-710742fe8fc7d75eaa9348a1e5a673ebec3cc68f398d061c7a2deb38cebafd06.scope - libcontainer container 710742fe8fc7d75eaa9348a1e5a673ebec3cc68f398d061c7a2deb38cebafd06. Jan 17 11:59:30.768350 systemd[1]: Started cri-containerd-1467689bf0920307db37807eadab9696a49b9ffa66ff590be1d3b95fc383058d.scope - libcontainer container 1467689bf0920307db37807eadab9696a49b9ffa66ff590be1d3b95fc383058d. Jan 17 11:59:30.774239 systemd[1]: Started cri-containerd-f826634d64bb314b73d40804b05fa5882ee36ce03b68b50f3f8726a122245ca4.scope - libcontainer container f826634d64bb314b73d40804b05fa5882ee36ce03b68b50f3f8726a122245ca4. Jan 17 11:59:30.778665 kubelet[2110]: W0117 11:59:30.778603 2110 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.32:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.32:6443: connect: connection refused Jan 17 11:59:30.778760 kubelet[2110]: E0117 11:59:30.778672 2110 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.32:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.32:6443: connect: connection refused" logger="UnhandledError" Jan 17 11:59:30.841439 containerd[1442]: time="2025-01-17T11:59:30.841262741Z" level=info msg="StartContainer for \"f826634d64bb314b73d40804b05fa5882ee36ce03b68b50f3f8726a122245ca4\" returns successfully" Jan 17 11:59:30.841439 containerd[1442]: time="2025-01-17T11:59:30.841354053Z" level=info msg="StartContainer for \"1467689bf0920307db37807eadab9696a49b9ffa66ff590be1d3b95fc383058d\" returns successfully" Jan 17 11:59:30.841639 containerd[1442]: time="2025-01-17T11:59:30.841281211Z" level=info msg="StartContainer for \"710742fe8fc7d75eaa9348a1e5a673ebec3cc68f398d061c7a2deb38cebafd06\" returns successfully" Jan 17 11:59:30.842254 kubelet[2110]: E0117 11:59:30.842211 2110 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.32:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.32:6443: connect: connection refused" interval="1.6s" Jan 17 11:59:30.902675 kubelet[2110]: W0117 11:59:30.902609 2110 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.32:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.32:6443: connect: connection refused Jan 17 11:59:30.902772 kubelet[2110]: E0117 11:59:30.902680 2110 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.32:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.32:6443: connect: connection refused" logger="UnhandledError" Jan 17 11:59:30.932760 
kubelet[2110]: W0117 11:59:30.931725 2110 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.32:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.32:6443: connect: connection refused Jan 17 11:59:30.932760 kubelet[2110]: E0117 11:59:30.931794 2110 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.32:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.32:6443: connect: connection refused" logger="UnhandledError" Jan 17 11:59:31.070882 kubelet[2110]: I0117 11:59:31.070840 2110 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jan 17 11:59:31.461222 kubelet[2110]: E0117 11:59:31.461135 2110 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 11:59:31.463613 kubelet[2110]: E0117 11:59:31.463469 2110 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 11:59:31.465473 kubelet[2110]: E0117 11:59:31.465449 2110 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 11:59:32.458696 kubelet[2110]: E0117 11:59:32.458653 2110 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jan 17 11:59:32.467647 kubelet[2110]: E0117 11:59:32.467572 2110 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 11:59:32.479907 kubelet[2110]: I0117 11:59:32.479719 2110 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Jan 17 11:59:32.479907 kubelet[2110]: E0117 11:59:32.479761 2110 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Jan 17 11:59:32.490392 kubelet[2110]: E0117 11:59:32.490359 2110 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 17 11:59:32.591286 kubelet[2110]: E0117 11:59:32.591243 2110 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 17 11:59:32.691451 kubelet[2110]: E0117 11:59:32.691394 2110 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 17 11:59:32.792030 kubelet[2110]: E0117 11:59:32.791917 2110 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 17 11:59:32.817433 kubelet[2110]: E0117 11:59:32.817401 2110 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 11:59:32.892078 kubelet[2110]: E0117 11:59:32.892037 2110 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 17 11:59:32.992974 kubelet[2110]: E0117 11:59:32.992928 2110 kubelet_node_status.go:453] "Error getting the current node from 
lister" err="node \"localhost\" not found" Jan 17 11:59:33.093508 kubelet[2110]: E0117 11:59:33.093402 2110 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 17 11:59:33.193982 kubelet[2110]: E0117 11:59:33.193935 2110 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 17 11:59:33.294434 kubelet[2110]: E0117 11:59:33.294385 2110 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 17 11:59:33.395419 kubelet[2110]: E0117 11:59:33.395135 2110 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 17 11:59:33.423156 kubelet[2110]: I0117 11:59:33.423112 2110 apiserver.go:52] "Watching apiserver" Jan 17 11:59:33.439140 kubelet[2110]: I0117 11:59:33.438725 2110 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 17 11:59:34.662060 systemd[1]: Reloading requested from client PID 2384 ('systemctl') (unit session-7.scope)... Jan 17 11:59:34.662075 systemd[1]: Reloading... Jan 17 11:59:34.718782 zram_generator::config[2423]: No configuration found. Jan 17 11:59:34.862114 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 11:59:34.925731 systemd[1]: Reloading finished in 263 ms. Jan 17 11:59:34.958963 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 11:59:34.967557 systemd[1]: kubelet.service: Deactivated successfully. Jan 17 11:59:34.967770 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 11:59:34.967814 systemd[1]: kubelet.service: Consumed 1.051s CPU time, 115.8M memory peak, 0B memory swap peak. Jan 17 11:59:34.977506 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 11:59:35.062041 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 11:59:35.065873 (kubelet)[2465]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 17 11:59:35.103845 kubelet[2465]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 11:59:35.103845 kubelet[2465]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 17 11:59:35.103845 kubelet[2465]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 17 11:59:35.104274 kubelet[2465]: I0117 11:59:35.103986 2465 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 17 11:59:35.111269 kubelet[2465]: I0117 11:59:35.110069 2465 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Jan 17 11:59:35.111269 kubelet[2465]: I0117 11:59:35.110094 2465 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 17 11:59:35.111269 kubelet[2465]: I0117 11:59:35.110308 2465 server.go:929] "Client rotation is on, will bootstrap in background" Jan 17 11:59:35.111676 kubelet[2465]: I0117 11:59:35.111643 2465 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 17 11:59:35.113797 kubelet[2465]: I0117 11:59:35.113701 2465 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 17 11:59:35.116341 kubelet[2465]: E0117 11:59:35.116317 2465 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 17 11:59:35.116341 kubelet[2465]: I0117 11:59:35.116342 2465 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 17 11:59:35.118783 kubelet[2465]: I0117 11:59:35.118762 2465 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 17 11:59:35.118975 kubelet[2465]: I0117 11:59:35.118961 2465 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 17 11:59:35.119170 kubelet[2465]: I0117 11:59:35.119141 2465 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 17 11:59:35.120487 kubelet[2465]: I0117 11:59:35.119250 2465 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 17 11:59:35.120487 kubelet[2465]: I0117 11:59:35.119451 2465 topology_manager.go:138] "Creating topology manager with none policy" Jan 17 11:59:35.120487 kubelet[2465]: I0117 11:59:35.119466 2465 container_manager_linux.go:300] "Creating device plugin manager" Jan 17 11:59:35.120487 kubelet[2465]: I0117 11:59:35.119501 2465 state_mem.go:36] "Initialized new in-memory state store" Jan 17 11:59:35.120487 kubelet[2465]: I0117 11:59:35.119602 2465 kubelet.go:408] "Attempting to sync node with API server" Jan 17 11:59:35.120685 kubelet[2465]: I0117 11:59:35.119616 2465 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 17 11:59:35.120685 kubelet[2465]: I0117 11:59:35.119646 2465 kubelet.go:314] "Adding apiserver pod source" Jan 17 11:59:35.120685 kubelet[2465]: I0117 11:59:35.119655 2465 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 17 11:59:35.127761 kubelet[2465]: I0117 11:59:35.124988 2465 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 17 11:59:35.127761 kubelet[2465]: I0117 11:59:35.125495 2465 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 17 11:59:35.127761 kubelet[2465]: I0117 11:59:35.125873 2465 server.go:1269] "Started kubelet" Jan 17 11:59:35.128117 kubelet[2465]: I0117 11:59:35.128094 2465 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 17 11:59:35.128866 kubelet[2465]: I0117 11:59:35.128834 2465 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 17 11:59:35.129253 kubelet[2465]: I0117 11:59:35.129228 2465 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 17 11:59:35.129610 kubelet[2465]: I0117 11:59:35.129569 2465 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 17 11:59:35.130688 kubelet[2465]: I0117 
11:59:35.130672 2465 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 17 11:59:35.130844 kubelet[2465]: I0117 11:59:35.130824 2465 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 17 11:59:35.130974 kubelet[2465]: E0117 11:59:35.130956 2465 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 17 11:59:35.131380 kubelet[2465]: I0117 11:59:35.131353 2465 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 17 11:59:35.131565 kubelet[2465]: I0117 11:59:35.131544 2465 reconciler.go:26] "Reconciler: start to sync state" Jan 17 11:59:35.133865 kubelet[2465]: I0117 11:59:35.133837 2465 factory.go:221] Registration of the systemd container factory successfully Jan 17 11:59:35.134060 kubelet[2465]: I0117 11:59:35.134038 2465 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 17 11:59:35.138351 kubelet[2465]: I0117 11:59:35.138309 2465 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 17 11:59:35.139113 kubelet[2465]: I0117 11:59:35.139089 2465 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 17 11:59:35.139113 kubelet[2465]: I0117 11:59:35.139111 2465 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 17 11:59:35.139226 kubelet[2465]: I0117 11:59:35.139127 2465 kubelet.go:2321] "Starting kubelet main sync loop" Jan 17 11:59:35.139226 kubelet[2465]: E0117 11:59:35.139162 2465 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 17 11:59:35.141360 kubelet[2465]: I0117 11:59:35.141336 2465 server.go:460] "Adding debug handlers to kubelet server" Jan 17 11:59:35.146139 kubelet[2465]: E0117 11:59:35.146116 2465 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 17 11:59:35.146632 kubelet[2465]: I0117 11:59:35.146607 2465 factory.go:221] Registration of the containerd container factory successfully Jan 17 11:59:35.178958 kubelet[2465]: I0117 11:59:35.178868 2465 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 17 11:59:35.179154 kubelet[2465]: I0117 11:59:35.179118 2465 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 17 11:59:35.179252 kubelet[2465]: I0117 11:59:35.179241 2465 state_mem.go:36] "Initialized new in-memory state store" Jan 17 11:59:35.179438 kubelet[2465]: I0117 11:59:35.179421 2465 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 17 11:59:35.179555 kubelet[2465]: I0117 11:59:35.179528 2465 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 17 11:59:35.179605 kubelet[2465]: I0117 11:59:35.179597 2465 policy_none.go:49] "None policy: Start" Jan 17 11:59:35.180749 kubelet[2465]: I0117 11:59:35.180732 2465 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 17 11:59:35.180889 kubelet[2465]: I0117 11:59:35.180877 2465 state_mem.go:35] "Initializing new in-memory state store" Jan 17 11:59:35.181131 kubelet[2465]: I0117 11:59:35.181110 2465 state_mem.go:75] "Updated machine memory state" Jan 17 11:59:35.184621 kubelet[2465]: I0117 11:59:35.184599 2465 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 17 11:59:35.184754 kubelet[2465]: I0117 11:59:35.184739 2465 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 17 11:59:35.184788 kubelet[2465]: I0117 11:59:35.184754 2465 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 17 11:59:35.185336 kubelet[2465]: I0117 11:59:35.185212 2465 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 17 11:59:35.288266 kubelet[2465]: I0117 11:59:35.288235 2465 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jan 17 11:59:35.294048 kubelet[2465]: I0117 11:59:35.294022 2465 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Jan 17 11:59:35.294123 kubelet[2465]: I0117 11:59:35.294097 2465 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Jan 17 11:59:35.433444 kubelet[2465]: I0117 11:59:35.433332 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 11:59:35.433444 kubelet[2465]: I0117 11:59:35.433364 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 11:59:35.433585 kubelet[2465]: I0117 11:59:35.433428 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c988230cd0d49eebfaffbefbe8c74a10-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"c988230cd0d49eebfaffbefbe8c74a10\") " pod="kube-system/kube-scheduler-localhost" Jan 17 11:59:35.433585 
kubelet[2465]: I0117 11:59:35.433495 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/bfc17f5401e3e65df01ba38a8ad3f085-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"bfc17f5401e3e65df01ba38a8ad3f085\") " pod="kube-system/kube-apiserver-localhost" Jan 17 11:59:35.433585 kubelet[2465]: I0117 11:59:35.433516 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/bfc17f5401e3e65df01ba38a8ad3f085-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"bfc17f5401e3e65df01ba38a8ad3f085\") " pod="kube-system/kube-apiserver-localhost" Jan 17 11:59:35.433585 kubelet[2465]: I0117 11:59:35.433539 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/bfc17f5401e3e65df01ba38a8ad3f085-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"bfc17f5401e3e65df01ba38a8ad3f085\") " pod="kube-system/kube-apiserver-localhost" Jan 17 11:59:35.433585 kubelet[2465]: I0117 11:59:35.433557 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 11:59:35.433719 kubelet[2465]: I0117 11:59:35.433573 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 11:59:35.433719 kubelet[2465]: I0117 11:59:35.433595 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 11:59:35.560409 kubelet[2465]: E0117 11:59:35.560337 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 11:59:35.560409 kubelet[2465]: E0117 11:59:35.560337 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 11:59:35.561491 kubelet[2465]: E0117 11:59:35.561456 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 11:59:36.120655 kubelet[2465]: I0117 11:59:36.120370 2465 apiserver.go:52] "Watching apiserver" Jan 17 11:59:36.131918 kubelet[2465]: I0117 11:59:36.131879 2465 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 17 11:59:36.160897 kubelet[2465]: E0117 11:59:36.160709 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the 
applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 11:59:36.160897 kubelet[2465]: E0117 11:59:36.160831 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 11:59:36.165451 kubelet[2465]: E0117 11:59:36.165374 2465 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Jan 17 11:59:36.165597 kubelet[2465]: E0117 11:59:36.165529 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 11:59:36.180022 kubelet[2465]: I0117 11:59:36.179967 2465 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.179950808 podStartE2EDuration="1.179950808s" podCreationTimestamp="2025-01-17 11:59:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 11:59:36.179827075 +0000 UTC m=+1.110348887" watchObservedRunningTime="2025-01-17 11:59:36.179950808 +0000 UTC m=+1.110472580" Jan 17 11:59:36.192527 kubelet[2465]: I0117 11:59:36.192456 2465 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.19244203 podStartE2EDuration="1.19244203s" podCreationTimestamp="2025-01-17 11:59:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 11:59:36.186338145 +0000 UTC m=+1.116859957" watchObservedRunningTime="2025-01-17 11:59:36.19244203 +0000 UTC m=+1.122963842" Jan 17 11:59:36.192648 kubelet[2465]: I0117 11:59:36.192565 2465 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.192561319 podStartE2EDuration="1.192561319s" podCreationTimestamp="2025-01-17 11:59:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 11:59:36.192523211 +0000 UTC m=+1.123045023" watchObservedRunningTime="2025-01-17 11:59:36.192561319 +0000 UTC m=+1.123083131" Jan 17 11:59:37.162465 kubelet[2465]: E0117 11:59:37.162427 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 11:59:37.162967 kubelet[2465]: E0117 11:59:37.162937 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 11:59:38.164440 kubelet[2465]: E0117 11:59:38.164408 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 11:59:39.925566 sudo[1622]: pam_unix(sudo:session): session closed for user root Jan 17 11:59:39.927107 sshd[1619]: pam_unix(sshd:session): session closed for user core Jan 17 11:59:39.930206 systemd[1]: sshd@6-10.0.0.32:22-10.0.0.1:53412.service: Deactivated successfully. Jan 17 11:59:39.931879 systemd[1]: session-7.scope: Deactivated successfully. 
Jan 17 11:59:39.932061 systemd[1]: session-7.scope: Consumed 6.815s CPU time, 154.5M memory peak, 0B memory swap peak. Jan 17 11:59:39.932586 systemd-logind[1424]: Session 7 logged out. Waiting for processes to exit. Jan 17 11:59:39.933489 systemd-logind[1424]: Removed session 7. Jan 17 11:59:40.826180 kubelet[2465]: I0117 11:59:40.826147 2465 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 17 11:59:40.826593 containerd[1442]: time="2025-01-17T11:59:40.826554550Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 17 11:59:40.826780 kubelet[2465]: I0117 11:59:40.826745 2465 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 17 11:59:41.644423 systemd[1]: Created slice kubepods-besteffort-pod8aed7cf6_1844_426b_bfee_d4c3d56b7a60.slice - libcontainer container kubepods-besteffort-pod8aed7cf6_1844_426b_bfee_d4c3d56b7a60.slice. Jan 17 11:59:41.670576 kubelet[2465]: I0117 11:59:41.670408 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/8aed7cf6-1844-426b-bfee-d4c3d56b7a60-kube-proxy\") pod \"kube-proxy-c927b\" (UID: \"8aed7cf6-1844-426b-bfee-d4c3d56b7a60\") " pod="kube-system/kube-proxy-c927b" Jan 17 11:59:41.670576 kubelet[2465]: I0117 11:59:41.670467 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8aed7cf6-1844-426b-bfee-d4c3d56b7a60-xtables-lock\") pod \"kube-proxy-c927b\" (UID: \"8aed7cf6-1844-426b-bfee-d4c3d56b7a60\") " pod="kube-system/kube-proxy-c927b" Jan 17 11:59:41.670576 kubelet[2465]: I0117 11:59:41.670485 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cknmt\" (UniqueName: \"kubernetes.io/projected/8aed7cf6-1844-426b-bfee-d4c3d56b7a60-kube-api-access-cknmt\") pod \"kube-proxy-c927b\" (UID: \"8aed7cf6-1844-426b-bfee-d4c3d56b7a60\") " pod="kube-system/kube-proxy-c927b" Jan 17 11:59:41.670576 kubelet[2465]: I0117 11:59:41.670510 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8aed7cf6-1844-426b-bfee-d4c3d56b7a60-lib-modules\") pod \"kube-proxy-c927b\" (UID: \"8aed7cf6-1844-426b-bfee-d4c3d56b7a60\") " pod="kube-system/kube-proxy-c927b" Jan 17 11:59:41.754242 kubelet[2465]: W0117 11:59:41.753845 2465 reflector.go:561] object-"tigera-operator"/"kubernetes-services-endpoint": failed to list *v1.ConfigMap: configmaps "kubernetes-services-endpoint" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "tigera-operator": no relationship found between node 'localhost' and this object Jan 17 11:59:41.754242 kubelet[2465]: E0117 11:59:41.753888 2465 reflector.go:158] "Unhandled Error" err="object-\"tigera-operator\"/\"kubernetes-services-endpoint\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kubernetes-services-endpoint\" is forbidden: User \"system:node:localhost\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"tigera-operator\": no relationship found between node 'localhost' and this object" logger="UnhandledError" Jan 17 11:59:41.754242 kubelet[2465]: W0117 11:59:41.753924 2465 reflector.go:561] object-"tigera-operator"/"kube-root-ca.crt": 
failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "tigera-operator": no relationship found between node 'localhost' and this object Jan 17 11:59:41.754242 kubelet[2465]: E0117 11:59:41.753935 2465 reflector.go:158] "Unhandled Error" err="object-\"tigera-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:localhost\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"tigera-operator\": no relationship found between node 'localhost' and this object" logger="UnhandledError" Jan 17 11:59:41.758382 systemd[1]: Created slice kubepods-besteffort-pod2e9554f9_755f_463a_987c_ce62ec36743f.slice - libcontainer container kubepods-besteffort-pod2e9554f9_755f_463a_987c_ce62ec36743f.slice. Jan 17 11:59:41.771239 kubelet[2465]: I0117 11:59:41.771206 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/2e9554f9-755f-463a-987c-ce62ec36743f-var-lib-calico\") pod \"tigera-operator-76c4976dd7-6h6hb\" (UID: \"2e9554f9-755f-463a-987c-ce62ec36743f\") " pod="tigera-operator/tigera-operator-76c4976dd7-6h6hb" Jan 17 11:59:41.771601 kubelet[2465]: I0117 11:59:41.771343 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jlnqs\" (UniqueName: \"kubernetes.io/projected/2e9554f9-755f-463a-987c-ce62ec36743f-kube-api-access-jlnqs\") pod \"tigera-operator-76c4976dd7-6h6hb\" (UID: \"2e9554f9-755f-463a-987c-ce62ec36743f\") " pod="tigera-operator/tigera-operator-76c4976dd7-6h6hb" Jan 17 11:59:41.957995 kubelet[2465]: E0117 11:59:41.957948 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 11:59:41.958828 containerd[1442]: time="2025-01-17T11:59:41.958471659Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-c927b,Uid:8aed7cf6-1844-426b-bfee-d4c3d56b7a60,Namespace:kube-system,Attempt:0,}" Jan 17 11:59:41.976239 containerd[1442]: time="2025-01-17T11:59:41.976134274Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 11:59:41.976367 containerd[1442]: time="2025-01-17T11:59:41.976222867Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 11:59:41.976367 containerd[1442]: time="2025-01-17T11:59:41.976251370Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 11:59:41.976367 containerd[1442]: time="2025-01-17T11:59:41.976327633Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 11:59:41.998442 systemd[1]: Started cri-containerd-187b86adf6eff53df8c44178054ac6259662ebeb1c675280cd1f2bd28590d1a2.scope - libcontainer container 187b86adf6eff53df8c44178054ac6259662ebeb1c675280cd1f2bd28590d1a2. 
Jan 17 11:59:42.016851 containerd[1442]: time="2025-01-17T11:59:42.016761373Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-c927b,Uid:8aed7cf6-1844-426b-bfee-d4c3d56b7a60,Namespace:kube-system,Attempt:0,} returns sandbox id \"187b86adf6eff53df8c44178054ac6259662ebeb1c675280cd1f2bd28590d1a2\"" Jan 17 11:59:42.017582 kubelet[2465]: E0117 11:59:42.017560 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 11:59:42.020722 containerd[1442]: time="2025-01-17T11:59:42.020687309Z" level=info msg="CreateContainer within sandbox \"187b86adf6eff53df8c44178054ac6259662ebeb1c675280cd1f2bd28590d1a2\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 17 11:59:42.038162 containerd[1442]: time="2025-01-17T11:59:42.038117035Z" level=info msg="CreateContainer within sandbox \"187b86adf6eff53df8c44178054ac6259662ebeb1c675280cd1f2bd28590d1a2\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"da15165bfdd6cf3c45a742e199d039841af7a3438998527059f03776d824584b\"" Jan 17 11:59:42.038913 containerd[1442]: time="2025-01-17T11:59:42.038760336Z" level=info msg="StartContainer for \"da15165bfdd6cf3c45a742e199d039841af7a3438998527059f03776d824584b\"" Jan 17 11:59:42.068479 systemd[1]: Started cri-containerd-da15165bfdd6cf3c45a742e199d039841af7a3438998527059f03776d824584b.scope - libcontainer container da15165bfdd6cf3c45a742e199d039841af7a3438998527059f03776d824584b. Jan 17 11:59:42.091478 containerd[1442]: time="2025-01-17T11:59:42.090966169Z" level=info msg="StartContainer for \"da15165bfdd6cf3c45a742e199d039841af7a3438998527059f03776d824584b\" returns successfully" Jan 17 11:59:42.094538 kubelet[2465]: E0117 11:59:42.094513 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 11:59:42.171466 kubelet[2465]: E0117 11:59:42.171431 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 11:59:42.171962 kubelet[2465]: E0117 11:59:42.171938 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 11:59:42.522479 kubelet[2465]: E0117 11:59:42.522430 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 11:59:42.541409 kubelet[2465]: I0117 11:59:42.541140 2465 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-c927b" podStartSLOduration=1.541120577 podStartE2EDuration="1.541120577s" podCreationTimestamp="2025-01-17 11:59:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 11:59:42.192494351 +0000 UTC m=+7.123016163" watchObservedRunningTime="2025-01-17 11:59:42.541120577 +0000 UTC m=+7.471642390" Jan 17 11:59:42.782387 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount32479341.mount: Deactivated successfully. 
Jan 17 11:59:42.879380 kubelet[2465]: E0117 11:59:42.879272 2465 projected.go:288] Couldn't get configMap tigera-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Jan 17 11:59:42.879380 kubelet[2465]: E0117 11:59:42.879314 2465 projected.go:194] Error preparing data for projected volume kube-api-access-jlnqs for pod tigera-operator/tigera-operator-76c4976dd7-6h6hb: failed to sync configmap cache: timed out waiting for the condition Jan 17 11:59:42.879380 kubelet[2465]: E0117 11:59:42.879381 2465 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2e9554f9-755f-463a-987c-ce62ec36743f-kube-api-access-jlnqs podName:2e9554f9-755f-463a-987c-ce62ec36743f nodeName:}" failed. No retries permitted until 2025-01-17 11:59:43.379359959 +0000 UTC m=+8.309881771 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-jlnqs" (UniqueName: "kubernetes.io/projected/2e9554f9-755f-463a-987c-ce62ec36743f-kube-api-access-jlnqs") pod "tigera-operator-76c4976dd7-6h6hb" (UID: "2e9554f9-755f-463a-987c-ce62ec36743f") : failed to sync configmap cache: timed out waiting for the condition Jan 17 11:59:43.173652 kubelet[2465]: E0117 11:59:43.173615 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 11:59:43.561900 containerd[1442]: time="2025-01-17T11:59:43.561390017Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4976dd7-6h6hb,Uid:2e9554f9-755f-463a-987c-ce62ec36743f,Namespace:tigera-operator,Attempt:0,}" Jan 17 11:59:43.600605 containerd[1442]: time="2025-01-17T11:59:43.600532405Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 11:59:43.600846 containerd[1442]: time="2025-01-17T11:59:43.600769700Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 11:59:43.600846 containerd[1442]: time="2025-01-17T11:59:43.600823379Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 11:59:43.601103 containerd[1442]: time="2025-01-17T11:59:43.601069641Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 11:59:43.624328 systemd[1]: Started cri-containerd-ce2d2af2b80dcbcf90d73e199ca300be9e36fe7fafc785c708f463f7d5478200.scope - libcontainer container ce2d2af2b80dcbcf90d73e199ca300be9e36fe7fafc785c708f463f7d5478200. Jan 17 11:59:43.649987 containerd[1442]: time="2025-01-17T11:59:43.649486980Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4976dd7-6h6hb,Uid:2e9554f9-755f-463a-987c-ce62ec36743f,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"ce2d2af2b80dcbcf90d73e199ca300be9e36fe7fafc785c708f463f7d5478200\"" Jan 17 11:59:43.651171 containerd[1442]: time="2025-01-17T11:59:43.651144640Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\"" Jan 17 11:59:44.880351 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount861015906.mount: Deactivated successfully. 
Jan 17 11:59:45.128212 containerd[1442]: time="2025-01-17T11:59:45.128088853Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 11:59:45.129081 containerd[1442]: time="2025-01-17T11:59:45.129051529Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=19125968" Jan 17 11:59:45.129663 containerd[1442]: time="2025-01-17T11:59:45.129617623Z" level=info msg="ImageCreate event name:\"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 11:59:45.134111 containerd[1442]: time="2025-01-17T11:59:45.133254425Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"19120155\" in 1.482071597s" Jan 17 11:59:45.134111 containerd[1442]: time="2025-01-17T11:59:45.133292170Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\"" Jan 17 11:59:45.134111 containerd[1442]: time="2025-01-17T11:59:45.133451435Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 11:59:45.139704 containerd[1442]: time="2025-01-17T11:59:45.139673264Z" level=info msg="CreateContainer within sandbox \"ce2d2af2b80dcbcf90d73e199ca300be9e36fe7fafc785c708f463f7d5478200\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 17 11:59:45.149719 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3891898460.mount: Deactivated successfully. Jan 17 11:59:45.195013 containerd[1442]: time="2025-01-17T11:59:45.194949493Z" level=info msg="CreateContainer within sandbox \"ce2d2af2b80dcbcf90d73e199ca300be9e36fe7fafc785c708f463f7d5478200\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"50f4aa81c4ef85376ed0f4d3d48905c3e5d51ce26afb71e8ae522a73e705ae8c\"" Jan 17 11:59:45.195317 containerd[1442]: time="2025-01-17T11:59:45.195281552Z" level=info msg="StartContainer for \"50f4aa81c4ef85376ed0f4d3d48905c3e5d51ce26afb71e8ae522a73e705ae8c\"" Jan 17 11:59:45.224352 systemd[1]: Started cri-containerd-50f4aa81c4ef85376ed0f4d3d48905c3e5d51ce26afb71e8ae522a73e705ae8c.scope - libcontainer container 50f4aa81c4ef85376ed0f4d3d48905c3e5d51ce26afb71e8ae522a73e705ae8c. 
Jan 17 11:59:45.245047 containerd[1442]: time="2025-01-17T11:59:45.245006875Z" level=info msg="StartContainer for \"50f4aa81c4ef85376ed0f4d3d48905c3e5d51ce26afb71e8ae522a73e705ae8c\" returns successfully" Jan 17 11:59:48.050911 kubelet[2465]: E0117 11:59:48.050865 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 11:59:48.061138 kubelet[2465]: I0117 11:59:48.059231 2465 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-76c4976dd7-6h6hb" podStartSLOduration=5.571260865 podStartE2EDuration="7.059218805s" podCreationTimestamp="2025-01-17 11:59:41 +0000 UTC" firstStartedPulling="2025-01-17 11:59:43.650446286 +0000 UTC m=+8.580968098" lastFinishedPulling="2025-01-17 11:59:45.138404226 +0000 UTC m=+10.068926038" observedRunningTime="2025-01-17 11:59:46.20958656 +0000 UTC m=+11.140108372" watchObservedRunningTime="2025-01-17 11:59:48.059218805 +0000 UTC m=+12.989740617" Jan 17 11:59:48.195471 kubelet[2465]: E0117 11:59:48.195402 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 11:59:49.100243 systemd[1]: Created slice kubepods-besteffort-pod16c9faeb_d89e_429f_b1e0_67d33c1213c5.slice - libcontainer container kubepods-besteffort-pod16c9faeb_d89e_429f_b1e0_67d33c1213c5.slice. Jan 17 11:59:49.109429 systemd[1]: Created slice kubepods-besteffort-podbb86dd8d_efb4_40f8_937f_fe09c2855260.slice - libcontainer container kubepods-besteffort-podbb86dd8d_efb4_40f8_937f_fe09c2855260.slice. Jan 17 11:59:49.124360 kubelet[2465]: I0117 11:59:49.124327 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hnpwv\" (UniqueName: \"kubernetes.io/projected/16c9faeb-d89e-429f-b1e0-67d33c1213c5-kube-api-access-hnpwv\") pod \"calico-typha-5fb854695c-dd8ph\" (UID: \"16c9faeb-d89e-429f-b1e0-67d33c1213c5\") " pod="calico-system/calico-typha-5fb854695c-dd8ph" Jan 17 11:59:49.125102 kubelet[2465]: I0117 11:59:49.125036 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/16c9faeb-d89e-429f-b1e0-67d33c1213c5-tigera-ca-bundle\") pod \"calico-typha-5fb854695c-dd8ph\" (UID: \"16c9faeb-d89e-429f-b1e0-67d33c1213c5\") " pod="calico-system/calico-typha-5fb854695c-dd8ph" Jan 17 11:59:49.125102 kubelet[2465]: I0117 11:59:49.125069 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/16c9faeb-d89e-429f-b1e0-67d33c1213c5-typha-certs\") pod \"calico-typha-5fb854695c-dd8ph\" (UID: \"16c9faeb-d89e-429f-b1e0-67d33c1213c5\") " pod="calico-system/calico-typha-5fb854695c-dd8ph" Jan 17 11:59:49.167074 kubelet[2465]: E0117 11:59:49.165290 2465 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-j7bcf" podUID="074801cb-bd28-41a4-b464-ef5bfb657c08" Jan 17 11:59:49.227259 kubelet[2465]: I0117 11:59:49.226138 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/bb86dd8d-efb4-40f8-937f-fe09c2855260-xtables-lock\") pod \"calico-node-zftkz\" (UID: \"bb86dd8d-efb4-40f8-937f-fe09c2855260\") " pod="calico-system/calico-node-zftkz" Jan 17 11:59:49.227259 kubelet[2465]: I0117 11:59:49.226207 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/bb86dd8d-efb4-40f8-937f-fe09c2855260-policysync\") pod \"calico-node-zftkz\" (UID: \"bb86dd8d-efb4-40f8-937f-fe09c2855260\") " pod="calico-system/calico-node-zftkz" Jan 17 11:59:49.227259 kubelet[2465]: I0117 11:59:49.226362 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/bb86dd8d-efb4-40f8-937f-fe09c2855260-var-lib-calico\") pod \"calico-node-zftkz\" (UID: \"bb86dd8d-efb4-40f8-937f-fe09c2855260\") " pod="calico-system/calico-node-zftkz" Jan 17 11:59:49.227259 kubelet[2465]: I0117 11:59:49.226398 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/bb86dd8d-efb4-40f8-937f-fe09c2855260-cni-bin-dir\") pod \"calico-node-zftkz\" (UID: \"bb86dd8d-efb4-40f8-937f-fe09c2855260\") " pod="calico-system/calico-node-zftkz" Jan 17 11:59:49.227259 kubelet[2465]: I0117 11:59:49.226428 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/bb86dd8d-efb4-40f8-937f-fe09c2855260-var-run-calico\") pod \"calico-node-zftkz\" (UID: \"bb86dd8d-efb4-40f8-937f-fe09c2855260\") " pod="calico-system/calico-node-zftkz" Jan 17 11:59:49.227644 kubelet[2465]: I0117 11:59:49.226443 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/bb86dd8d-efb4-40f8-937f-fe09c2855260-cni-net-dir\") pod \"calico-node-zftkz\" (UID: \"bb86dd8d-efb4-40f8-937f-fe09c2855260\") " pod="calico-system/calico-node-zftkz" Jan 17 11:59:49.227644 kubelet[2465]: I0117 11:59:49.226459 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bb86dd8d-efb4-40f8-937f-fe09c2855260-lib-modules\") pod \"calico-node-zftkz\" (UID: \"bb86dd8d-efb4-40f8-937f-fe09c2855260\") " pod="calico-system/calico-node-zftkz" Jan 17 11:59:49.227644 kubelet[2465]: I0117 11:59:49.226484 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bb86dd8d-efb4-40f8-937f-fe09c2855260-tigera-ca-bundle\") pod \"calico-node-zftkz\" (UID: \"bb86dd8d-efb4-40f8-937f-fe09c2855260\") " pod="calico-system/calico-node-zftkz" Jan 17 11:59:49.227644 kubelet[2465]: I0117 11:59:49.226502 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/bb86dd8d-efb4-40f8-937f-fe09c2855260-cni-log-dir\") pod \"calico-node-zftkz\" (UID: \"bb86dd8d-efb4-40f8-937f-fe09c2855260\") " pod="calico-system/calico-node-zftkz" Jan 17 11:59:49.227644 kubelet[2465]: I0117 11:59:49.226516 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/bb86dd8d-efb4-40f8-937f-fe09c2855260-flexvol-driver-host\") pod 
\"calico-node-zftkz\" (UID: \"bb86dd8d-efb4-40f8-937f-fe09c2855260\") " pod="calico-system/calico-node-zftkz" Jan 17 11:59:49.227806 kubelet[2465]: I0117 11:59:49.226536 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5h9xd\" (UniqueName: \"kubernetes.io/projected/bb86dd8d-efb4-40f8-937f-fe09c2855260-kube-api-access-5h9xd\") pod \"calico-node-zftkz\" (UID: \"bb86dd8d-efb4-40f8-937f-fe09c2855260\") " pod="calico-system/calico-node-zftkz" Jan 17 11:59:49.227806 kubelet[2465]: I0117 11:59:49.226552 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/bb86dd8d-efb4-40f8-937f-fe09c2855260-node-certs\") pod \"calico-node-zftkz\" (UID: \"bb86dd8d-efb4-40f8-937f-fe09c2855260\") " pod="calico-system/calico-node-zftkz" Jan 17 11:59:49.327033 kubelet[2465]: I0117 11:59:49.326991 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9kwh\" (UniqueName: \"kubernetes.io/projected/074801cb-bd28-41a4-b464-ef5bfb657c08-kube-api-access-d9kwh\") pod \"csi-node-driver-j7bcf\" (UID: \"074801cb-bd28-41a4-b464-ef5bfb657c08\") " pod="calico-system/csi-node-driver-j7bcf" Jan 17 11:59:49.327297 kubelet[2465]: I0117 11:59:49.327277 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/074801cb-bd28-41a4-b464-ef5bfb657c08-varrun\") pod \"csi-node-driver-j7bcf\" (UID: \"074801cb-bd28-41a4-b464-ef5bfb657c08\") " pod="calico-system/csi-node-driver-j7bcf" Jan 17 11:59:49.327384 kubelet[2465]: I0117 11:59:49.327371 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/074801cb-bd28-41a4-b464-ef5bfb657c08-socket-dir\") pod \"csi-node-driver-j7bcf\" (UID: \"074801cb-bd28-41a4-b464-ef5bfb657c08\") " pod="calico-system/csi-node-driver-j7bcf" Jan 17 11:59:49.327452 kubelet[2465]: I0117 11:59:49.327440 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/074801cb-bd28-41a4-b464-ef5bfb657c08-registration-dir\") pod \"csi-node-driver-j7bcf\" (UID: \"074801cb-bd28-41a4-b464-ef5bfb657c08\") " pod="calico-system/csi-node-driver-j7bcf" Jan 17 11:59:49.327607 kubelet[2465]: I0117 11:59:49.327592 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/074801cb-bd28-41a4-b464-ef5bfb657c08-kubelet-dir\") pod \"csi-node-driver-j7bcf\" (UID: \"074801cb-bd28-41a4-b464-ef5bfb657c08\") " pod="calico-system/csi-node-driver-j7bcf" Jan 17 11:59:49.334154 kubelet[2465]: E0117 11:59:49.334122 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 11:59:49.334154 kubelet[2465]: W0117 11:59:49.334151 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 11:59:49.334325 kubelet[2465]: E0117 11:59:49.334308 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 11:59:49.335942 kubelet[2465]: E0117 11:59:49.335925 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 11:59:49.336077 kubelet[2465]: W0117 11:59:49.336028 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 11:59:49.336077 kubelet[2465]: E0117 11:59:49.336049 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 11:59:49.338036 kubelet[2465]: E0117 11:59:49.337999 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 11:59:49.338036 kubelet[2465]: W0117 11:59:49.338018 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 11:59:49.338036 kubelet[2465]: E0117 11:59:49.338032 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 11:59:49.405994 kubelet[2465]: E0117 11:59:49.405961 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 11:59:49.407280 containerd[1442]: time="2025-01-17T11:59:49.406627212Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5fb854695c-dd8ph,Uid:16c9faeb-d89e-429f-b1e0-67d33c1213c5,Namespace:calico-system,Attempt:0,}" Jan 17 11:59:49.414242 kubelet[2465]: E0117 11:59:49.414125 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 11:59:49.414689 containerd[1442]: time="2025-01-17T11:59:49.414644421Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-zftkz,Uid:bb86dd8d-efb4-40f8-937f-fe09c2855260,Namespace:calico-system,Attempt:0,}" Jan 17 11:59:49.428568 kubelet[2465]: E0117 11:59:49.428531 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 11:59:49.428568 kubelet[2465]: W0117 11:59:49.428554 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 11:59:49.428568 kubelet[2465]: E0117 11:59:49.428572 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 11:59:49.428816 kubelet[2465]: E0117 11:59:49.428802 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 11:59:49.428816 kubelet[2465]: W0117 11:59:49.428812 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 11:59:49.428866 kubelet[2465]: E0117 11:59:49.428825 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 11:59:49.428996 kubelet[2465]: E0117 11:59:49.428984 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 11:59:49.429020 kubelet[2465]: W0117 11:59:49.428996 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 11:59:49.429020 kubelet[2465]: E0117 11:59:49.429006 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 11:59:49.429235 kubelet[2465]: E0117 11:59:49.429222 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 11:59:49.429266 kubelet[2465]: W0117 11:59:49.429237 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 11:59:49.429266 kubelet[2465]: E0117 11:59:49.429247 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 11:59:49.429500 kubelet[2465]: E0117 11:59:49.429426 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 11:59:49.429500 kubelet[2465]: W0117 11:59:49.429493 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 11:59:49.429552 kubelet[2465]: E0117 11:59:49.429515 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 11:59:49.429952 kubelet[2465]: E0117 11:59:49.429936 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 11:59:49.429952 kubelet[2465]: W0117 11:59:49.429951 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 11:59:49.430027 kubelet[2465]: E0117 11:59:49.429966 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 11:59:49.430124 kubelet[2465]: E0117 11:59:49.430108 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 11:59:49.430124 kubelet[2465]: W0117 11:59:49.430118 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 11:59:49.430240 kubelet[2465]: E0117 11:59:49.430172 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 11:59:49.430345 kubelet[2465]: E0117 11:59:49.430333 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 11:59:49.430345 kubelet[2465]: W0117 11:59:49.430344 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 11:59:49.430402 kubelet[2465]: E0117 11:59:49.430365 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 11:59:49.430525 kubelet[2465]: E0117 11:59:49.430507 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 11:59:49.430525 kubelet[2465]: W0117 11:59:49.430517 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 11:59:49.430617 kubelet[2465]: E0117 11:59:49.430541 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 11:59:49.430717 kubelet[2465]: E0117 11:59:49.430699 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 11:59:49.430717 kubelet[2465]: W0117 11:59:49.430711 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 11:59:49.430773 kubelet[2465]: E0117 11:59:49.430731 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 11:59:49.430920 kubelet[2465]: E0117 11:59:49.430896 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 11:59:49.430920 kubelet[2465]: W0117 11:59:49.430909 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 11:59:49.431014 kubelet[2465]: E0117 11:59:49.430928 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 11:59:49.431098 kubelet[2465]: E0117 11:59:49.431065 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 11:59:49.431098 kubelet[2465]: W0117 11:59:49.431073 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 11:59:49.431098 kubelet[2465]: E0117 11:59:49.431087 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 11:59:49.431374 kubelet[2465]: E0117 11:59:49.431355 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 11:59:49.431374 kubelet[2465]: W0117 11:59:49.431369 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 11:59:49.431445 kubelet[2465]: E0117 11:59:49.431391 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 11:59:49.432817 kubelet[2465]: E0117 11:59:49.431613 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 11:59:49.432817 kubelet[2465]: W0117 11:59:49.431625 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 11:59:49.432817 kubelet[2465]: E0117 11:59:49.431645 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 11:59:49.432817 kubelet[2465]: E0117 11:59:49.431836 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 11:59:49.432817 kubelet[2465]: W0117 11:59:49.431844 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 11:59:49.432817 kubelet[2465]: E0117 11:59:49.431870 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 11:59:49.432817 kubelet[2465]: E0117 11:59:49.432540 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 11:59:49.432817 kubelet[2465]: W0117 11:59:49.432552 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 11:59:49.432817 kubelet[2465]: E0117 11:59:49.432629 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 11:59:49.433067 kubelet[2465]: E0117 11:59:49.432871 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 11:59:49.433067 kubelet[2465]: W0117 11:59:49.432881 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 11:59:49.433262 kubelet[2465]: E0117 11:59:49.433242 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 11:59:49.434240 kubelet[2465]: E0117 11:59:49.434217 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 11:59:49.434240 kubelet[2465]: W0117 11:59:49.434234 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 11:59:49.434855 kubelet[2465]: E0117 11:59:49.434283 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 11:59:49.434855 kubelet[2465]: E0117 11:59:49.434416 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 11:59:49.434855 kubelet[2465]: W0117 11:59:49.434424 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 11:59:49.434855 kubelet[2465]: E0117 11:59:49.434453 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 11:59:49.434855 kubelet[2465]: E0117 11:59:49.434725 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 11:59:49.434855 kubelet[2465]: W0117 11:59:49.434737 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 11:59:49.434855 kubelet[2465]: E0117 11:59:49.434777 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 11:59:49.436071 kubelet[2465]: E0117 11:59:49.436043 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 11:59:49.436071 kubelet[2465]: W0117 11:59:49.436061 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 11:59:49.436162 kubelet[2465]: E0117 11:59:49.436101 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 11:59:49.439259 kubelet[2465]: E0117 11:59:49.437767 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 11:59:49.439259 kubelet[2465]: W0117 11:59:49.437808 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 11:59:49.439259 kubelet[2465]: E0117 11:59:49.437870 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 11:59:49.439259 kubelet[2465]: E0117 11:59:49.438393 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 11:59:49.439259 kubelet[2465]: W0117 11:59:49.438404 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 11:59:49.439259 kubelet[2465]: E0117 11:59:49.438471 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 11:59:49.439259 kubelet[2465]: E0117 11:59:49.438671 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 11:59:49.439259 kubelet[2465]: W0117 11:59:49.438678 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 11:59:49.439259 kubelet[2465]: E0117 11:59:49.438728 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 11:59:49.439259 kubelet[2465]: E0117 11:59:49.438860 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 11:59:49.439750 kubelet[2465]: W0117 11:59:49.438867 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 11:59:49.439750 kubelet[2465]: E0117 11:59:49.438874 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 11:59:49.451916 kubelet[2465]: E0117 11:59:49.451760 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 11:59:49.451916 kubelet[2465]: W0117 11:59:49.451808 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 11:59:49.451916 kubelet[2465]: E0117 11:59:49.451827 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 11:59:49.454985 containerd[1442]: time="2025-01-17T11:59:49.454463042Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 11:59:49.454985 containerd[1442]: time="2025-01-17T11:59:49.454538442Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 11:59:49.454985 containerd[1442]: time="2025-01-17T11:59:49.454563576Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 11:59:49.455304 containerd[1442]: time="2025-01-17T11:59:49.454669472Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 11:59:49.462536 containerd[1442]: time="2025-01-17T11:59:49.462430624Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 11:59:49.462536 containerd[1442]: time="2025-01-17T11:59:49.462504223Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 11:59:49.462717 containerd[1442]: time="2025-01-17T11:59:49.462542044Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 11:59:49.462717 containerd[1442]: time="2025-01-17T11:59:49.462664029Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 11:59:49.474331 systemd[1]: Started cri-containerd-8e0aa041ef3b0a3dc385df6386e6466bb534d9069726ebe618c6519e6223cdbf.scope - libcontainer container 8e0aa041ef3b0a3dc385df6386e6466bb534d9069726ebe618c6519e6223cdbf. Jan 17 11:59:49.477288 systemd[1]: Started cri-containerd-cb7ed4c8586cdd5336536964ea11dae02cf773620f6e816cbecf1895dedcb8d3.scope - libcontainer container cb7ed4c8586cdd5336536964ea11dae02cf773620f6e816cbecf1895dedcb8d3. Jan 17 11:59:49.496387 containerd[1442]: time="2025-01-17T11:59:49.496347968Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-zftkz,Uid:bb86dd8d-efb4-40f8-937f-fe09c2855260,Namespace:calico-system,Attempt:0,} returns sandbox id \"cb7ed4c8586cdd5336536964ea11dae02cf773620f6e816cbecf1895dedcb8d3\"" Jan 17 11:59:49.497153 kubelet[2465]: E0117 11:59:49.497128 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 11:59:49.500006 containerd[1442]: time="2025-01-17T11:59:49.499874494Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Jan 17 11:59:49.516634 containerd[1442]: time="2025-01-17T11:59:49.516572107Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5fb854695c-dd8ph,Uid:16c9faeb-d89e-429f-b1e0-67d33c1213c5,Namespace:calico-system,Attempt:0,} returns sandbox id \"8e0aa041ef3b0a3dc385df6386e6466bb534d9069726ebe618c6519e6223cdbf\"" Jan 17 11:59:49.517349 kubelet[2465]: E0117 11:59:49.517320 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 11:59:50.129064 update_engine[1427]: I20250117 11:59:50.128422 1427 update_attempter.cc:509] Updating boot flags... 
Jan 17 11:59:50.155229 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2978) Jan 17 11:59:50.193239 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2979) Jan 17 11:59:50.580354 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4143661761.mount: Deactivated successfully. Jan 17 11:59:50.873166 containerd[1442]: time="2025-01-17T11:59:50.873041671Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 11:59:50.874745 containerd[1442]: time="2025-01-17T11:59:50.874587417Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=6487603" Jan 17 11:59:50.875506 containerd[1442]: time="2025-01-17T11:59:50.875461101Z" level=info msg="ImageCreate event name:\"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 11:59:50.878068 containerd[1442]: time="2025-01-17T11:59:50.878030847Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 11:59:50.879543 containerd[1442]: time="2025-01-17T11:59:50.879418633Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6487425\" in 1.379496874s" Jan 17 11:59:50.879543 containerd[1442]: time="2025-01-17T11:59:50.879456372Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\"" Jan 17 11:59:50.880537 containerd[1442]: time="2025-01-17T11:59:50.880414419Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\"" Jan 17 11:59:50.882579 containerd[1442]: time="2025-01-17T11:59:50.882151502Z" level=info msg="CreateContainer within sandbox \"cb7ed4c8586cdd5336536964ea11dae02cf773620f6e816cbecf1895dedcb8d3\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 17 11:59:50.920765 containerd[1442]: time="2025-01-17T11:59:50.920639345Z" level=info msg="CreateContainer within sandbox \"cb7ed4c8586cdd5336536964ea11dae02cf773620f6e816cbecf1895dedcb8d3\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"2b3aa2057917e2c21445f4440b4c8e0eb1606f0299aa080dd8592c53bb38ac52\"" Jan 17 11:59:50.921466 containerd[1442]: time="2025-01-17T11:59:50.921438271Z" level=info msg="StartContainer for \"2b3aa2057917e2c21445f4440b4c8e0eb1606f0299aa080dd8592c53bb38ac52\"" Jan 17 11:59:50.949368 systemd[1]: Started cri-containerd-2b3aa2057917e2c21445f4440b4c8e0eb1606f0299aa080dd8592c53bb38ac52.scope - libcontainer container 2b3aa2057917e2c21445f4440b4c8e0eb1606f0299aa080dd8592c53bb38ac52. 
Jan 17 11:59:50.986267 containerd[1442]: time="2025-01-17T11:59:50.986182381Z" level=info msg="StartContainer for \"2b3aa2057917e2c21445f4440b4c8e0eb1606f0299aa080dd8592c53bb38ac52\" returns successfully" Jan 17 11:59:51.022976 systemd[1]: cri-containerd-2b3aa2057917e2c21445f4440b4c8e0eb1606f0299aa080dd8592c53bb38ac52.scope: Deactivated successfully. Jan 17 11:59:51.054214 containerd[1442]: time="2025-01-17T11:59:51.050437131Z" level=info msg="shim disconnected" id=2b3aa2057917e2c21445f4440b4c8e0eb1606f0299aa080dd8592c53bb38ac52 namespace=k8s.io Jan 17 11:59:51.054214 containerd[1442]: time="2025-01-17T11:59:51.054209794Z" level=warning msg="cleaning up after shim disconnected" id=2b3aa2057917e2c21445f4440b4c8e0eb1606f0299aa080dd8592c53bb38ac52 namespace=k8s.io Jan 17 11:59:51.054478 containerd[1442]: time="2025-01-17T11:59:51.054227003Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 11:59:51.140533 kubelet[2465]: E0117 11:59:51.140182 2465 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-j7bcf" podUID="074801cb-bd28-41a4-b464-ef5bfb657c08" Jan 17 11:59:51.207922 kubelet[2465]: E0117 11:59:51.207890 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 11:59:52.237967 containerd[1442]: time="2025-01-17T11:59:52.237924523Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 11:59:52.239069 containerd[1442]: time="2025-01-17T11:59:52.238996576Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=27861516" Jan 17 11:59:52.239959 containerd[1442]: time="2025-01-17T11:59:52.239921161Z" level=info msg="ImageCreate event name:\"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 11:59:52.241948 containerd[1442]: time="2025-01-17T11:59:52.241923802Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 11:59:52.242795 containerd[1442]: time="2025-01-17T11:59:52.242544248Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"29231162\" in 1.362064596s" Jan 17 11:59:52.242795 containerd[1442]: time="2025-01-17T11:59:52.242575342Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\"" Jan 17 11:59:52.243680 containerd[1442]: time="2025-01-17T11:59:52.243649796Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Jan 17 11:59:52.251778 containerd[1442]: time="2025-01-17T11:59:52.251734755Z" level=info msg="CreateContainer within sandbox \"8e0aa041ef3b0a3dc385df6386e6466bb534d9069726ebe618c6519e6223cdbf\" for container 
&ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 17 11:59:52.265239 containerd[1442]: time="2025-01-17T11:59:52.265204669Z" level=info msg="CreateContainer within sandbox \"8e0aa041ef3b0a3dc385df6386e6466bb534d9069726ebe618c6519e6223cdbf\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"637fc22249367878c8254f843466edf8fd5cb8f910340431002f8095de03bbf9\"" Jan 17 11:59:52.266943 containerd[1442]: time="2025-01-17T11:59:52.266280204Z" level=info msg="StartContainer for \"637fc22249367878c8254f843466edf8fd5cb8f910340431002f8095de03bbf9\"" Jan 17 11:59:52.297335 systemd[1]: Started cri-containerd-637fc22249367878c8254f843466edf8fd5cb8f910340431002f8095de03bbf9.scope - libcontainer container 637fc22249367878c8254f843466edf8fd5cb8f910340431002f8095de03bbf9. Jan 17 11:59:52.333993 containerd[1442]: time="2025-01-17T11:59:52.333950846Z" level=info msg="StartContainer for \"637fc22249367878c8254f843466edf8fd5cb8f910340431002f8095de03bbf9\" returns successfully" Jan 17 11:59:53.139809 kubelet[2465]: E0117 11:59:53.139758 2465 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-j7bcf" podUID="074801cb-bd28-41a4-b464-ef5bfb657c08" Jan 17 11:59:53.213514 kubelet[2465]: E0117 11:59:53.213152 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 11:59:53.224931 kubelet[2465]: I0117 11:59:53.224627 2465 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5fb854695c-dd8ph" podStartSLOduration=1.49905425 podStartE2EDuration="4.224613134s" podCreationTimestamp="2025-01-17 11:59:49 +0000 UTC" firstStartedPulling="2025-01-17 11:59:49.517997309 +0000 UTC m=+14.448519121" lastFinishedPulling="2025-01-17 11:59:52.243556193 +0000 UTC m=+17.174078005" observedRunningTime="2025-01-17 11:59:53.223328892 +0000 UTC m=+18.153850704" watchObservedRunningTime="2025-01-17 11:59:53.224613134 +0000 UTC m=+18.155134946" Jan 17 11:59:54.214892 kubelet[2465]: I0117 11:59:54.214853 2465 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 17 11:59:54.215486 kubelet[2465]: E0117 11:59:54.215466 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 11:59:55.139876 kubelet[2465]: E0117 11:59:55.139819 2465 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-j7bcf" podUID="074801cb-bd28-41a4-b464-ef5bfb657c08" Jan 17 11:59:55.740121 containerd[1442]: time="2025-01-17T11:59:55.740080753Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 11:59:55.741218 containerd[1442]: time="2025-01-17T11:59:55.741112563Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=89703123" Jan 17 11:59:55.742398 containerd[1442]: time="2025-01-17T11:59:55.742344254Z" level=info msg="ImageCreate event 
name:\"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 11:59:55.744253 containerd[1442]: time="2025-01-17T11:59:55.744216039Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 11:59:55.745296 containerd[1442]: time="2025-01-17T11:59:55.745247209Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"91072777\" in 3.501456228s" Jan 17 11:59:55.745296 containerd[1442]: time="2025-01-17T11:59:55.745275500Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\"" Jan 17 11:59:55.748408 containerd[1442]: time="2025-01-17T11:59:55.747918512Z" level=info msg="CreateContainer within sandbox \"cb7ed4c8586cdd5336536964ea11dae02cf773620f6e816cbecf1895dedcb8d3\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 17 11:59:55.760504 containerd[1442]: time="2025-01-17T11:59:55.760457024Z" level=info msg="CreateContainer within sandbox \"cb7ed4c8586cdd5336536964ea11dae02cf773620f6e816cbecf1895dedcb8d3\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"bc842e1c707ed2db5eca10f9259e656dee556c5485e1df36c37f760e3add116f\"" Jan 17 11:59:55.761776 containerd[1442]: time="2025-01-17T11:59:55.761237014Z" level=info msg="StartContainer for \"bc842e1c707ed2db5eca10f9259e656dee556c5485e1df36c37f760e3add116f\"" Jan 17 11:59:55.792396 systemd[1]: Started cri-containerd-bc842e1c707ed2db5eca10f9259e656dee556c5485e1df36c37f760e3add116f.scope - libcontainer container bc842e1c707ed2db5eca10f9259e656dee556c5485e1df36c37f760e3add116f. Jan 17 11:59:55.816949 containerd[1442]: time="2025-01-17T11:59:55.816909415Z" level=info msg="StartContainer for \"bc842e1c707ed2db5eca10f9259e656dee556c5485e1df36c37f760e3add116f\" returns successfully" Jan 17 11:59:56.221224 kubelet[2465]: E0117 11:59:56.220621 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 11:59:56.270008 systemd[1]: cri-containerd-bc842e1c707ed2db5eca10f9259e656dee556c5485e1df36c37f760e3add116f.scope: Deactivated successfully. Jan 17 11:59:56.288600 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bc842e1c707ed2db5eca10f9259e656dee556c5485e1df36c37f760e3add116f-rootfs.mount: Deactivated successfully. Jan 17 11:59:56.356323 kubelet[2465]: I0117 11:59:56.356278 2465 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Jan 17 11:59:56.403575 systemd[1]: Created slice kubepods-burstable-pod9990c74b_816e_4bf1_9470_a9d91243af45.slice - libcontainer container kubepods-burstable-pod9990c74b_816e_4bf1_9470_a9d91243af45.slice. 
Jan 17 11:59:56.410087 containerd[1442]: time="2025-01-17T11:59:56.409215565Z" level=info msg="shim disconnected" id=bc842e1c707ed2db5eca10f9259e656dee556c5485e1df36c37f760e3add116f namespace=k8s.io Jan 17 11:59:56.410087 containerd[1442]: time="2025-01-17T11:59:56.409271866Z" level=warning msg="cleaning up after shim disconnected" id=bc842e1c707ed2db5eca10f9259e656dee556c5485e1df36c37f760e3add116f namespace=k8s.io Jan 17 11:59:56.410087 containerd[1442]: time="2025-01-17T11:59:56.409280470Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 11:59:56.410292 systemd[1]: Created slice kubepods-besteffort-pod6c20dd64_894e_4ff1_b1cd_c8495df31316.slice - libcontainer container kubepods-besteffort-pod6c20dd64_894e_4ff1_b1cd_c8495df31316.slice. Jan 17 11:59:56.416539 systemd[1]: Created slice kubepods-burstable-podbf214c06_9b1d_4dac_a1ec_12f7f34d3261.slice - libcontainer container kubepods-burstable-podbf214c06_9b1d_4dac_a1ec_12f7f34d3261.slice. Jan 17 11:59:56.425364 systemd[1]: Created slice kubepods-besteffort-pod8e3c1cb9_32d1_4d37_bc9c_8ecf060d43d5.slice - libcontainer container kubepods-besteffort-pod8e3c1cb9_32d1_4d37_bc9c_8ecf060d43d5.slice. Jan 17 11:59:56.431294 systemd[1]: Created slice kubepods-besteffort-pod5dfed3bd_1c28_4d8c_bf55_5f0787bcf7c5.slice - libcontainer container kubepods-besteffort-pod5dfed3bd_1c28_4d8c_bf55_5f0787bcf7c5.slice. Jan 17 11:59:56.479567 kubelet[2465]: I0117 11:59:56.478853 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/6c20dd64-894e-4ff1-b1cd-c8495df31316-calico-apiserver-certs\") pod \"calico-apiserver-7d954f8cd6-mxjf9\" (UID: \"6c20dd64-894e-4ff1-b1cd-c8495df31316\") " pod="calico-apiserver/calico-apiserver-7d954f8cd6-mxjf9" Jan 17 11:59:56.479567 kubelet[2465]: I0117 11:59:56.478903 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9990c74b-816e-4bf1-9470-a9d91243af45-config-volume\") pod \"coredns-6f6b679f8f-2qxc4\" (UID: \"9990c74b-816e-4bf1-9470-a9d91243af45\") " pod="kube-system/coredns-6f6b679f8f-2qxc4" Jan 17 11:59:56.479567 kubelet[2465]: I0117 11:59:56.478923 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/8e3c1cb9-32d1-4d37-bc9c-8ecf060d43d5-calico-apiserver-certs\") pod \"calico-apiserver-7d954f8cd6-tmq7s\" (UID: \"8e3c1cb9-32d1-4d37-bc9c-8ecf060d43d5\") " pod="calico-apiserver/calico-apiserver-7d954f8cd6-tmq7s" Jan 17 11:59:56.479567 kubelet[2465]: I0117 11:59:56.478944 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hk9kf\" (UniqueName: \"kubernetes.io/projected/bf214c06-9b1d-4dac-a1ec-12f7f34d3261-kube-api-access-hk9kf\") pod \"coredns-6f6b679f8f-28s6w\" (UID: \"bf214c06-9b1d-4dac-a1ec-12f7f34d3261\") " pod="kube-system/coredns-6f6b679f8f-28s6w" Jan 17 11:59:56.479567 kubelet[2465]: I0117 11:59:56.478964 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xfltt\" (UniqueName: \"kubernetes.io/projected/6c20dd64-894e-4ff1-b1cd-c8495df31316-kube-api-access-xfltt\") pod \"calico-apiserver-7d954f8cd6-mxjf9\" (UID: \"6c20dd64-894e-4ff1-b1cd-c8495df31316\") " pod="calico-apiserver/calico-apiserver-7d954f8cd6-mxjf9" Jan 17 11:59:56.479791 kubelet[2465]: I0117 
11:59:56.478984 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nkqn4\" (UniqueName: \"kubernetes.io/projected/9990c74b-816e-4bf1-9470-a9d91243af45-kube-api-access-nkqn4\") pod \"coredns-6f6b679f8f-2qxc4\" (UID: \"9990c74b-816e-4bf1-9470-a9d91243af45\") " pod="kube-system/coredns-6f6b679f8f-2qxc4" Jan 17 11:59:56.479791 kubelet[2465]: I0117 11:59:56.479003 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bf214c06-9b1d-4dac-a1ec-12f7f34d3261-config-volume\") pod \"coredns-6f6b679f8f-28s6w\" (UID: \"bf214c06-9b1d-4dac-a1ec-12f7f34d3261\") " pod="kube-system/coredns-6f6b679f8f-28s6w" Jan 17 11:59:56.479791 kubelet[2465]: I0117 11:59:56.479018 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p27mj\" (UniqueName: \"kubernetes.io/projected/8e3c1cb9-32d1-4d37-bc9c-8ecf060d43d5-kube-api-access-p27mj\") pod \"calico-apiserver-7d954f8cd6-tmq7s\" (UID: \"8e3c1cb9-32d1-4d37-bc9c-8ecf060d43d5\") " pod="calico-apiserver/calico-apiserver-7d954f8cd6-tmq7s" Jan 17 11:59:56.579933 kubelet[2465]: I0117 11:59:56.579484 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5dfed3bd-1c28-4d8c-bf55-5f0787bcf7c5-tigera-ca-bundle\") pod \"calico-kube-controllers-5d976f8577-5mqh2\" (UID: \"5dfed3bd-1c28-4d8c-bf55-5f0787bcf7c5\") " pod="calico-system/calico-kube-controllers-5d976f8577-5mqh2" Jan 17 11:59:56.579933 kubelet[2465]: I0117 11:59:56.579585 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-26h76\" (UniqueName: \"kubernetes.io/projected/5dfed3bd-1c28-4d8c-bf55-5f0787bcf7c5-kube-api-access-26h76\") pod \"calico-kube-controllers-5d976f8577-5mqh2\" (UID: \"5dfed3bd-1c28-4d8c-bf55-5f0787bcf7c5\") " pod="calico-system/calico-kube-controllers-5d976f8577-5mqh2" Jan 17 11:59:56.707854 kubelet[2465]: E0117 11:59:56.707304 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 11:59:56.708195 containerd[1442]: time="2025-01-17T11:59:56.708143427Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-2qxc4,Uid:9990c74b-816e-4bf1-9470-a9d91243af45,Namespace:kube-system,Attempt:0,}" Jan 17 11:59:56.715284 containerd[1442]: time="2025-01-17T11:59:56.715240844Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d954f8cd6-mxjf9,Uid:6c20dd64-894e-4ff1-b1cd-c8495df31316,Namespace:calico-apiserver,Attempt:0,}" Jan 17 11:59:56.723101 kubelet[2465]: E0117 11:59:56.722928 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 11:59:56.723972 containerd[1442]: time="2025-01-17T11:59:56.723435958Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-28s6w,Uid:bf214c06-9b1d-4dac-a1ec-12f7f34d3261,Namespace:kube-system,Attempt:0,}" Jan 17 11:59:56.728965 containerd[1442]: time="2025-01-17T11:59:56.728927645Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d954f8cd6-tmq7s,Uid:8e3c1cb9-32d1-4d37-bc9c-8ecf060d43d5,Namespace:calico-apiserver,Attempt:0,}" Jan 17 
11:59:56.736288 containerd[1442]: time="2025-01-17T11:59:56.735925383Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5d976f8577-5mqh2,Uid:5dfed3bd-1c28-4d8c-bf55-5f0787bcf7c5,Namespace:calico-system,Attempt:0,}" Jan 17 11:59:57.167027 systemd[1]: Created slice kubepods-besteffort-pod074801cb_bd28_41a4_b464_ef5bfb657c08.slice - libcontainer container kubepods-besteffort-pod074801cb_bd28_41a4_b464_ef5bfb657c08.slice. Jan 17 11:59:57.170258 containerd[1442]: time="2025-01-17T11:59:57.170207735Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-j7bcf,Uid:074801cb-bd28-41a4-b464-ef5bfb657c08,Namespace:calico-system,Attempt:0,}" Jan 17 11:59:57.227952 kubelet[2465]: E0117 11:59:57.227911 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 11:59:57.230364 containerd[1442]: time="2025-01-17T11:59:57.230321476Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Jan 17 11:59:57.253972 containerd[1442]: time="2025-01-17T11:59:57.253924004Z" level=error msg="Failed to destroy network for sandbox \"dd2afa2329efa91b2b18827f7bf6a54096f578825faefd31139f31287f5dc7c0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 11:59:57.255158 containerd[1442]: time="2025-01-17T11:59:57.255123479Z" level=error msg="encountered an error cleaning up failed sandbox \"dd2afa2329efa91b2b18827f7bf6a54096f578825faefd31139f31287f5dc7c0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 11:59:57.255222 containerd[1442]: time="2025-01-17T11:59:57.255196466Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d954f8cd6-tmq7s,Uid:8e3c1cb9-32d1-4d37-bc9c-8ecf060d43d5,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"dd2afa2329efa91b2b18827f7bf6a54096f578825faefd31139f31287f5dc7c0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 11:59:57.256453 kubelet[2465]: E0117 11:59:57.256396 2465 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dd2afa2329efa91b2b18827f7bf6a54096f578825faefd31139f31287f5dc7c0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 11:59:57.256551 kubelet[2465]: E0117 11:59:57.256481 2465 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dd2afa2329efa91b2b18827f7bf6a54096f578825faefd31139f31287f5dc7c0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7d954f8cd6-tmq7s" Jan 17 11:59:57.256551 kubelet[2465]: E0117 11:59:57.256502 2465 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc 
error: code = Unknown desc = failed to setup network for sandbox \"dd2afa2329efa91b2b18827f7bf6a54096f578825faefd31139f31287f5dc7c0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7d954f8cd6-tmq7s" Jan 17 11:59:57.256598 kubelet[2465]: E0117 11:59:57.256548 2465 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7d954f8cd6-tmq7s_calico-apiserver(8e3c1cb9-32d1-4d37-bc9c-8ecf060d43d5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7d954f8cd6-tmq7s_calico-apiserver(8e3c1cb9-32d1-4d37-bc9c-8ecf060d43d5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dd2afa2329efa91b2b18827f7bf6a54096f578825faefd31139f31287f5dc7c0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7d954f8cd6-tmq7s" podUID="8e3c1cb9-32d1-4d37-bc9c-8ecf060d43d5" Jan 17 11:59:57.272478 containerd[1442]: time="2025-01-17T11:59:57.272424840Z" level=error msg="Failed to destroy network for sandbox \"e8269b6e1d7b9d8eda00e9aff3d042ffa04d54a55d72c2ff69881df4d2a9b02f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 11:59:57.272955 containerd[1442]: time="2025-01-17T11:59:57.272919139Z" level=error msg="Failed to destroy network for sandbox \"addf4a3ebbde98a78ec691c6e1c2d748806a5739b054be87f34be7db8f5ca751\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 11:59:57.273498 containerd[1442]: time="2025-01-17T11:59:57.273461816Z" level=error msg="encountered an error cleaning up failed sandbox \"e8269b6e1d7b9d8eda00e9aff3d042ffa04d54a55d72c2ff69881df4d2a9b02f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 11:59:57.273651 containerd[1442]: time="2025-01-17T11:59:57.273626796Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-2qxc4,Uid:9990c74b-816e-4bf1-9470-a9d91243af45,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e8269b6e1d7b9d8eda00e9aff3d042ffa04d54a55d72c2ff69881df4d2a9b02f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 11:59:57.273945 kubelet[2465]: E0117 11:59:57.273899 2465 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e8269b6e1d7b9d8eda00e9aff3d042ffa04d54a55d72c2ff69881df4d2a9b02f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 11:59:57.274042 kubelet[2465]: E0117 11:59:57.273961 2465 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc 
error: code = Unknown desc = failed to setup network for sandbox \"e8269b6e1d7b9d8eda00e9aff3d042ffa04d54a55d72c2ff69881df4d2a9b02f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-2qxc4" Jan 17 11:59:57.274042 kubelet[2465]: E0117 11:59:57.273980 2465 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e8269b6e1d7b9d8eda00e9aff3d042ffa04d54a55d72c2ff69881df4d2a9b02f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-2qxc4" Jan 17 11:59:57.274095 kubelet[2465]: E0117 11:59:57.274048 2465 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-2qxc4_kube-system(9990c74b-816e-4bf1-9470-a9d91243af45)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-2qxc4_kube-system(9990c74b-816e-4bf1-9470-a9d91243af45)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e8269b6e1d7b9d8eda00e9aff3d042ffa04d54a55d72c2ff69881df4d2a9b02f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-2qxc4" podUID="9990c74b-816e-4bf1-9470-a9d91243af45" Jan 17 11:59:57.275161 containerd[1442]: time="2025-01-17T11:59:57.275108014Z" level=error msg="encountered an error cleaning up failed sandbox \"addf4a3ebbde98a78ec691c6e1c2d748806a5739b054be87f34be7db8f5ca751\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 11:59:57.275248 containerd[1442]: time="2025-01-17T11:59:57.275179240Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-28s6w,Uid:bf214c06-9b1d-4dac-a1ec-12f7f34d3261,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"addf4a3ebbde98a78ec691c6e1c2d748806a5739b054be87f34be7db8f5ca751\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 11:59:57.275586 kubelet[2465]: E0117 11:59:57.275399 2465 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"addf4a3ebbde98a78ec691c6e1c2d748806a5739b054be87f34be7db8f5ca751\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 11:59:57.275586 kubelet[2465]: E0117 11:59:57.275455 2465 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"addf4a3ebbde98a78ec691c6e1c2d748806a5739b054be87f34be7db8f5ca751\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-6f6b679f8f-28s6w" Jan 17 11:59:57.275586 kubelet[2465]: E0117 11:59:57.275472 2465 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"addf4a3ebbde98a78ec691c6e1c2d748806a5739b054be87f34be7db8f5ca751\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-28s6w" Jan 17 11:59:57.275682 kubelet[2465]: E0117 11:59:57.275510 2465 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-28s6w_kube-system(bf214c06-9b1d-4dac-a1ec-12f7f34d3261)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-28s6w_kube-system(bf214c06-9b1d-4dac-a1ec-12f7f34d3261)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"addf4a3ebbde98a78ec691c6e1c2d748806a5739b054be87f34be7db8f5ca751\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-28s6w" podUID="bf214c06-9b1d-4dac-a1ec-12f7f34d3261" Jan 17 11:59:57.281084 containerd[1442]: time="2025-01-17T11:59:57.280960738Z" level=error msg="Failed to destroy network for sandbox \"1a6f8de862b36904f696733422ad5d8c23138bccd11e7816eb827af270530096\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 11:59:57.281541 containerd[1442]: time="2025-01-17T11:59:57.281507016Z" level=error msg="encountered an error cleaning up failed sandbox \"1a6f8de862b36904f696733422ad5d8c23138bccd11e7816eb827af270530096\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 11:59:57.281659 containerd[1442]: time="2025-01-17T11:59:57.281636303Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d954f8cd6-mxjf9,Uid:6c20dd64-894e-4ff1-b1cd-c8495df31316,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1a6f8de862b36904f696733422ad5d8c23138bccd11e7816eb827af270530096\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 11:59:57.281984 containerd[1442]: time="2025-01-17T11:59:57.281782116Z" level=error msg="Failed to destroy network for sandbox \"0bb0ba1d5cab4545c0dfd64be4f487c0773838c5999074dba52161d526304434\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 11:59:57.282064 kubelet[2465]: E0117 11:59:57.281943 2465 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1a6f8de862b36904f696733422ad5d8c23138bccd11e7816eb827af270530096\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Jan 17 11:59:57.282166 kubelet[2465]: E0117 11:59:57.282148 2465 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1a6f8de862b36904f696733422ad5d8c23138bccd11e7816eb827af270530096\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7d954f8cd6-mxjf9" Jan 17 11:59:57.282247 kubelet[2465]: E0117 11:59:57.282232 2465 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1a6f8de862b36904f696733422ad5d8c23138bccd11e7816eb827af270530096\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7d954f8cd6-mxjf9" Jan 17 11:59:57.282367 kubelet[2465]: E0117 11:59:57.282336 2465 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7d954f8cd6-mxjf9_calico-apiserver(6c20dd64-894e-4ff1-b1cd-c8495df31316)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7d954f8cd6-mxjf9_calico-apiserver(6c20dd64-894e-4ff1-b1cd-c8495df31316)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1a6f8de862b36904f696733422ad5d8c23138bccd11e7816eb827af270530096\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7d954f8cd6-mxjf9" podUID="6c20dd64-894e-4ff1-b1cd-c8495df31316" Jan 17 11:59:57.283080 containerd[1442]: time="2025-01-17T11:59:57.282944298Z" level=error msg="encountered an error cleaning up failed sandbox \"0bb0ba1d5cab4545c0dfd64be4f487c0773838c5999074dba52161d526304434\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 11:59:57.283168 containerd[1442]: time="2025-01-17T11:59:57.283107277Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5d976f8577-5mqh2,Uid:5dfed3bd-1c28-4d8c-bf55-5f0787bcf7c5,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0bb0ba1d5cab4545c0dfd64be4f487c0773838c5999074dba52161d526304434\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 11:59:57.283470 kubelet[2465]: E0117 11:59:57.283333 2465 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0bb0ba1d5cab4545c0dfd64be4f487c0773838c5999074dba52161d526304434\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 11:59:57.283470 kubelet[2465]: E0117 11:59:57.283379 2465 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"0bb0ba1d5cab4545c0dfd64be4f487c0773838c5999074dba52161d526304434\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5d976f8577-5mqh2" Jan 17 11:59:57.283470 kubelet[2465]: E0117 11:59:57.283394 2465 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0bb0ba1d5cab4545c0dfd64be4f487c0773838c5999074dba52161d526304434\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5d976f8577-5mqh2" Jan 17 11:59:57.283571 kubelet[2465]: E0117 11:59:57.283425 2465 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5d976f8577-5mqh2_calico-system(5dfed3bd-1c28-4d8c-bf55-5f0787bcf7c5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5d976f8577-5mqh2_calico-system(5dfed3bd-1c28-4d8c-bf55-5f0787bcf7c5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0bb0ba1d5cab4545c0dfd64be4f487c0773838c5999074dba52161d526304434\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5d976f8577-5mqh2" podUID="5dfed3bd-1c28-4d8c-bf55-5f0787bcf7c5" Jan 17 11:59:57.296312 containerd[1442]: time="2025-01-17T11:59:57.296261292Z" level=error msg="Failed to destroy network for sandbox \"b78391e2512c7a5fe0821cbd72999136727c5cad9d4a85d68b507395aa2d3f12\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 11:59:57.296850 containerd[1442]: time="2025-01-17T11:59:57.296693049Z" level=error msg="encountered an error cleaning up failed sandbox \"b78391e2512c7a5fe0821cbd72999136727c5cad9d4a85d68b507395aa2d3f12\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 11:59:57.296850 containerd[1442]: time="2025-01-17T11:59:57.296759633Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-j7bcf,Uid:074801cb-bd28-41a4-b464-ef5bfb657c08,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b78391e2512c7a5fe0821cbd72999136727c5cad9d4a85d68b507395aa2d3f12\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 11:59:57.297420 kubelet[2465]: E0117 11:59:57.297048 2465 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b78391e2512c7a5fe0821cbd72999136727c5cad9d4a85d68b507395aa2d3f12\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 11:59:57.297420 kubelet[2465]: E0117 11:59:57.297102 2465 
kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b78391e2512c7a5fe0821cbd72999136727c5cad9d4a85d68b507395aa2d3f12\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-j7bcf" Jan 17 11:59:57.297420 kubelet[2465]: E0117 11:59:57.297119 2465 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b78391e2512c7a5fe0821cbd72999136727c5cad9d4a85d68b507395aa2d3f12\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-j7bcf" Jan 17 11:59:57.297581 kubelet[2465]: E0117 11:59:57.297157 2465 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-j7bcf_calico-system(074801cb-bd28-41a4-b464-ef5bfb657c08)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-j7bcf_calico-system(074801cb-bd28-41a4-b464-ef5bfb657c08)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b78391e2512c7a5fe0821cbd72999136727c5cad9d4a85d68b507395aa2d3f12\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-j7bcf" podUID="074801cb-bd28-41a4-b464-ef5bfb657c08" Jan 17 11:59:57.762269 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-dd2afa2329efa91b2b18827f7bf6a54096f578825faefd31139f31287f5dc7c0-shm.mount: Deactivated successfully. Jan 17 11:59:57.762352 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0bb0ba1d5cab4545c0dfd64be4f487c0773838c5999074dba52161d526304434-shm.mount: Deactivated successfully. Jan 17 11:59:57.762402 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-addf4a3ebbde98a78ec691c6e1c2d748806a5739b054be87f34be7db8f5ca751-shm.mount: Deactivated successfully. Jan 17 11:59:57.762448 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1a6f8de862b36904f696733422ad5d8c23138bccd11e7816eb827af270530096-shm.mount: Deactivated successfully. Jan 17 11:59:57.762505 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e8269b6e1d7b9d8eda00e9aff3d042ffa04d54a55d72c2ff69881df4d2a9b02f-shm.mount: Deactivated successfully. 
Jan 17 11:59:58.234435 kubelet[2465]: I0117 11:59:58.232579 2465 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1a6f8de862b36904f696733422ad5d8c23138bccd11e7816eb827af270530096" Jan 17 11:59:58.234435 kubelet[2465]: I0117 11:59:58.234369 2465 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="addf4a3ebbde98a78ec691c6e1c2d748806a5739b054be87f34be7db8f5ca751" Jan 17 11:59:58.235460 kubelet[2465]: I0117 11:59:58.235008 2465 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e8269b6e1d7b9d8eda00e9aff3d042ffa04d54a55d72c2ff69881df4d2a9b02f" Jan 17 11:59:58.235491 containerd[1442]: time="2025-01-17T11:59:58.235021369Z" level=info msg="StopPodSandbox for \"addf4a3ebbde98a78ec691c6e1c2d748806a5739b054be87f34be7db8f5ca751\"" Jan 17 11:59:58.235491 containerd[1442]: time="2025-01-17T11:59:58.235169060Z" level=info msg="Ensure that sandbox addf4a3ebbde98a78ec691c6e1c2d748806a5739b054be87f34be7db8f5ca751 in task-service has been cleanup successfully" Jan 17 11:59:58.235727 containerd[1442]: time="2025-01-17T11:59:58.235598169Z" level=info msg="StopPodSandbox for \"e8269b6e1d7b9d8eda00e9aff3d042ffa04d54a55d72c2ff69881df4d2a9b02f\"" Jan 17 11:59:58.235727 containerd[1442]: time="2025-01-17T11:59:58.235721372Z" level=info msg="Ensure that sandbox e8269b6e1d7b9d8eda00e9aff3d042ffa04d54a55d72c2ff69881df4d2a9b02f in task-service has been cleanup successfully" Jan 17 11:59:58.241482 containerd[1442]: time="2025-01-17T11:59:58.241408306Z" level=info msg="StopPodSandbox for \"1a6f8de862b36904f696733422ad5d8c23138bccd11e7816eb827af270530096\"" Jan 17 11:59:58.241971 containerd[1442]: time="2025-01-17T11:59:58.241774433Z" level=info msg="Ensure that sandbox 1a6f8de862b36904f696733422ad5d8c23138bccd11e7816eb827af270530096 in task-service has been cleanup successfully" Jan 17 11:59:58.242041 kubelet[2465]: I0117 11:59:58.241799 2465 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b78391e2512c7a5fe0821cbd72999136727c5cad9d4a85d68b507395aa2d3f12" Jan 17 11:59:58.245294 containerd[1442]: time="2025-01-17T11:59:58.242290332Z" level=info msg="StopPodSandbox for \"b78391e2512c7a5fe0821cbd72999136727c5cad9d4a85d68b507395aa2d3f12\"" Jan 17 11:59:58.245294 containerd[1442]: time="2025-01-17T11:59:58.242440464Z" level=info msg="Ensure that sandbox b78391e2512c7a5fe0821cbd72999136727c5cad9d4a85d68b507395aa2d3f12 in task-service has been cleanup successfully" Jan 17 11:59:58.245294 containerd[1442]: time="2025-01-17T11:59:58.245282010Z" level=info msg="StopPodSandbox for \"0bb0ba1d5cab4545c0dfd64be4f487c0773838c5999074dba52161d526304434\"" Jan 17 11:59:58.246460 kubelet[2465]: I0117 11:59:58.244802 2465 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0bb0ba1d5cab4545c0dfd64be4f487c0773838c5999074dba52161d526304434" Jan 17 11:59:58.247475 containerd[1442]: time="2025-01-17T11:59:58.245418818Z" level=info msg="Ensure that sandbox 0bb0ba1d5cab4545c0dfd64be4f487c0773838c5999074dba52161d526304434 in task-service has been cleanup successfully" Jan 17 11:59:58.249047 kubelet[2465]: I0117 11:59:58.248847 2465 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dd2afa2329efa91b2b18827f7bf6a54096f578825faefd31139f31287f5dc7c0" Jan 17 11:59:58.250957 containerd[1442]: time="2025-01-17T11:59:58.250580049Z" level=info msg="StopPodSandbox for \"dd2afa2329efa91b2b18827f7bf6a54096f578825faefd31139f31287f5dc7c0\"" Jan 17 11:59:58.250957 
containerd[1442]: time="2025-01-17T11:59:58.250748067Z" level=info msg="Ensure that sandbox dd2afa2329efa91b2b18827f7bf6a54096f578825faefd31139f31287f5dc7c0 in task-service has been cleanup successfully" Jan 17 11:59:58.294497 containerd[1442]: time="2025-01-17T11:59:58.294440073Z" level=error msg="StopPodSandbox for \"e8269b6e1d7b9d8eda00e9aff3d042ffa04d54a55d72c2ff69881df4d2a9b02f\" failed" error="failed to destroy network for sandbox \"e8269b6e1d7b9d8eda00e9aff3d042ffa04d54a55d72c2ff69881df4d2a9b02f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 11:59:58.294634 containerd[1442]: time="2025-01-17T11:59:58.294546790Z" level=error msg="StopPodSandbox for \"1a6f8de862b36904f696733422ad5d8c23138bccd11e7816eb827af270530096\" failed" error="failed to destroy network for sandbox \"1a6f8de862b36904f696733422ad5d8c23138bccd11e7816eb827af270530096\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 11:59:58.294809 kubelet[2465]: E0117 11:59:58.294766 2465 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e8269b6e1d7b9d8eda00e9aff3d042ffa04d54a55d72c2ff69881df4d2a9b02f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e8269b6e1d7b9d8eda00e9aff3d042ffa04d54a55d72c2ff69881df4d2a9b02f" Jan 17 11:59:58.294936 kubelet[2465]: E0117 11:59:58.294866 2465 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e8269b6e1d7b9d8eda00e9aff3d042ffa04d54a55d72c2ff69881df4d2a9b02f"} Jan 17 11:59:58.294977 kubelet[2465]: E0117 11:59:58.294815 2465 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"1a6f8de862b36904f696733422ad5d8c23138bccd11e7816eb827af270530096\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="1a6f8de862b36904f696733422ad5d8c23138bccd11e7816eb827af270530096" Jan 17 11:59:58.295060 kubelet[2465]: E0117 11:59:58.294981 2465 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"1a6f8de862b36904f696733422ad5d8c23138bccd11e7816eb827af270530096"} Jan 17 11:59:58.295091 kubelet[2465]: E0117 11:59:58.295069 2465 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"6c20dd64-894e-4ff1-b1cd-c8495df31316\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1a6f8de862b36904f696733422ad5d8c23138bccd11e7816eb827af270530096\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 11:59:58.295151 kubelet[2465]: E0117 11:59:58.295092 2465 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"6c20dd64-894e-4ff1-b1cd-c8495df31316\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"1a6f8de862b36904f696733422ad5d8c23138bccd11e7816eb827af270530096\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7d954f8cd6-mxjf9" podUID="6c20dd64-894e-4ff1-b1cd-c8495df31316" Jan 17 11:59:58.297257 kubelet[2465]: E0117 11:59:58.294951 2465 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9990c74b-816e-4bf1-9470-a9d91243af45\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e8269b6e1d7b9d8eda00e9aff3d042ffa04d54a55d72c2ff69881df4d2a9b02f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 11:59:58.297375 kubelet[2465]: E0117 11:59:58.297264 2465 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9990c74b-816e-4bf1-9470-a9d91243af45\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e8269b6e1d7b9d8eda00e9aff3d042ffa04d54a55d72c2ff69881df4d2a9b02f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-2qxc4" podUID="9990c74b-816e-4bf1-9470-a9d91243af45" Jan 17 11:59:58.302723 containerd[1442]: time="2025-01-17T11:59:58.302674451Z" level=error msg="StopPodSandbox for \"0bb0ba1d5cab4545c0dfd64be4f487c0773838c5999074dba52161d526304434\" failed" error="failed to destroy network for sandbox \"0bb0ba1d5cab4545c0dfd64be4f487c0773838c5999074dba52161d526304434\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 11:59:58.302947 kubelet[2465]: E0117 11:59:58.302905 2465 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0bb0ba1d5cab4545c0dfd64be4f487c0773838c5999074dba52161d526304434\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0bb0ba1d5cab4545c0dfd64be4f487c0773838c5999074dba52161d526304434" Jan 17 11:59:58.302997 kubelet[2465]: E0117 11:59:58.302954 2465 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"0bb0ba1d5cab4545c0dfd64be4f487c0773838c5999074dba52161d526304434"} Jan 17 11:59:58.302997 kubelet[2465]: E0117 11:59:58.302984 2465 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5dfed3bd-1c28-4d8c-bf55-5f0787bcf7c5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0bb0ba1d5cab4545c0dfd64be4f487c0773838c5999074dba52161d526304434\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 11:59:58.303069 kubelet[2465]: E0117 11:59:58.303002 2465 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for 
\"5dfed3bd-1c28-4d8c-bf55-5f0787bcf7c5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0bb0ba1d5cab4545c0dfd64be4f487c0773838c5999074dba52161d526304434\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5d976f8577-5mqh2" podUID="5dfed3bd-1c28-4d8c-bf55-5f0787bcf7c5" Jan 17 11:59:58.314985 containerd[1442]: time="2025-01-17T11:59:58.314938028Z" level=error msg="StopPodSandbox for \"addf4a3ebbde98a78ec691c6e1c2d748806a5739b054be87f34be7db8f5ca751\" failed" error="failed to destroy network for sandbox \"addf4a3ebbde98a78ec691c6e1c2d748806a5739b054be87f34be7db8f5ca751\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 11:59:58.315063 containerd[1442]: time="2025-01-17T11:59:58.314938108Z" level=error msg="StopPodSandbox for \"dd2afa2329efa91b2b18827f7bf6a54096f578825faefd31139f31287f5dc7c0\" failed" error="failed to destroy network for sandbox \"dd2afa2329efa91b2b18827f7bf6a54096f578825faefd31139f31287f5dc7c0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 11:59:58.315198 kubelet[2465]: E0117 11:59:58.315155 2465 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"dd2afa2329efa91b2b18827f7bf6a54096f578825faefd31139f31287f5dc7c0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="dd2afa2329efa91b2b18827f7bf6a54096f578825faefd31139f31287f5dc7c0" Jan 17 11:59:58.315234 kubelet[2465]: E0117 11:59:58.315218 2465 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"dd2afa2329efa91b2b18827f7bf6a54096f578825faefd31139f31287f5dc7c0"} Jan 17 11:59:58.315268 kubelet[2465]: E0117 11:59:58.315253 2465 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"8e3c1cb9-32d1-4d37-bc9c-8ecf060d43d5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"dd2afa2329efa91b2b18827f7bf6a54096f578825faefd31139f31287f5dc7c0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 11:59:58.315311 kubelet[2465]: E0117 11:59:58.315272 2465 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8e3c1cb9-32d1-4d37-bc9c-8ecf060d43d5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"dd2afa2329efa91b2b18827f7bf6a54096f578825faefd31139f31287f5dc7c0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7d954f8cd6-tmq7s" podUID="8e3c1cb9-32d1-4d37-bc9c-8ecf060d43d5" Jan 17 11:59:58.315311 kubelet[2465]: E0117 11:59:58.315301 2465 log.go:32] "StopPodSandbox from 
runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"addf4a3ebbde98a78ec691c6e1c2d748806a5739b054be87f34be7db8f5ca751\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="addf4a3ebbde98a78ec691c6e1c2d748806a5739b054be87f34be7db8f5ca751" Jan 17 11:59:58.315382 kubelet[2465]: E0117 11:59:58.315316 2465 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"addf4a3ebbde98a78ec691c6e1c2d748806a5739b054be87f34be7db8f5ca751"} Jan 17 11:59:58.315382 kubelet[2465]: E0117 11:59:58.315332 2465 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"bf214c06-9b1d-4dac-a1ec-12f7f34d3261\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"addf4a3ebbde98a78ec691c6e1c2d748806a5739b054be87f34be7db8f5ca751\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 11:59:58.315382 kubelet[2465]: E0117 11:59:58.315359 2465 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"bf214c06-9b1d-4dac-a1ec-12f7f34d3261\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"addf4a3ebbde98a78ec691c6e1c2d748806a5739b054be87f34be7db8f5ca751\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-28s6w" podUID="bf214c06-9b1d-4dac-a1ec-12f7f34d3261" Jan 17 11:59:58.317963 containerd[1442]: time="2025-01-17T11:59:58.317876287Z" level=error msg="StopPodSandbox for \"b78391e2512c7a5fe0821cbd72999136727c5cad9d4a85d68b507395aa2d3f12\" failed" error="failed to destroy network for sandbox \"b78391e2512c7a5fe0821cbd72999136727c5cad9d4a85d68b507395aa2d3f12\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 11:59:58.318814 kubelet[2465]: E0117 11:59:58.318765 2465 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b78391e2512c7a5fe0821cbd72999136727c5cad9d4a85d68b507395aa2d3f12\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b78391e2512c7a5fe0821cbd72999136727c5cad9d4a85d68b507395aa2d3f12" Jan 17 11:59:58.318869 kubelet[2465]: E0117 11:59:58.318806 2465 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b78391e2512c7a5fe0821cbd72999136727c5cad9d4a85d68b507395aa2d3f12"} Jan 17 11:59:58.318869 kubelet[2465]: E0117 11:59:58.318847 2465 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"074801cb-bd28-41a4-b464-ef5bfb657c08\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b78391e2512c7a5fe0821cbd72999136727c5cad9d4a85d68b507395aa2d3f12\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: 
no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 11:59:58.318942 kubelet[2465]: E0117 11:59:58.318867 2465 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"074801cb-bd28-41a4-b464-ef5bfb657c08\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b78391e2512c7a5fe0821cbd72999136727c5cad9d4a85d68b507395aa2d3f12\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-j7bcf" podUID="074801cb-bd28-41a4-b464-ef5bfb657c08" Jan 17 12:00:01.389994 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3497390901.mount: Deactivated successfully. Jan 17 12:00:01.743496 containerd[1442]: time="2025-01-17T12:00:01.743252764Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:00:01.745556 containerd[1442]: time="2025-01-17T12:00:01.745423267Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=137671762" Jan 17 12:00:01.762108 containerd[1442]: time="2025-01-17T12:00:01.762038736Z" level=info msg="ImageCreate event name:\"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:00:01.764047 containerd[1442]: time="2025-01-17T12:00:01.763998494Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:00:01.764716 containerd[1442]: time="2025-01-17T12:00:01.764521013Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"137671624\" in 4.534154322s" Jan 17 12:00:01.764716 containerd[1442]: time="2025-01-17T12:00:01.764556024Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\"" Jan 17 12:00:01.776121 containerd[1442]: time="2025-01-17T12:00:01.776071538Z" level=info msg="CreateContainer within sandbox \"cb7ed4c8586cdd5336536964ea11dae02cf773620f6e816cbecf1895dedcb8d3\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 17 12:00:01.791074 containerd[1442]: time="2025-01-17T12:00:01.791021539Z" level=info msg="CreateContainer within sandbox \"cb7ed4c8586cdd5336536964ea11dae02cf773620f6e816cbecf1895dedcb8d3\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"55668c320e446cf55159f1036a72c029fcd28226873573d13d381fa4ad0d012b\"" Jan 17 12:00:01.791589 containerd[1442]: time="2025-01-17T12:00:01.791563704Z" level=info msg="StartContainer for \"55668c320e446cf55159f1036a72c029fcd28226873573d13d381fa4ad0d012b\"" Jan 17 12:00:01.840321 systemd[1]: Started cri-containerd-55668c320e446cf55159f1036a72c029fcd28226873573d13d381fa4ad0d012b.scope - libcontainer container 55668c320e446cf55159f1036a72c029fcd28226873573d13d381fa4ad0d012b. 
Jan 17 12:00:01.864421 containerd[1442]: time="2025-01-17T12:00:01.864374999Z" level=info msg="StartContainer for \"55668c320e446cf55159f1036a72c029fcd28226873573d13d381fa4ad0d012b\" returns successfully" Jan 17 12:00:02.018237 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 17 12:00:02.018381 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. Jan 17 12:00:02.262062 kubelet[2465]: E0117 12:00:02.260650 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:00:02.277211 kubelet[2465]: I0117 12:00:02.277142 2465 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-zftkz" podStartSLOduration=1.010953865 podStartE2EDuration="13.277119258s" podCreationTimestamp="2025-01-17 11:59:49 +0000 UTC" firstStartedPulling="2025-01-17 11:59:49.498960445 +0000 UTC m=+14.429482257" lastFinishedPulling="2025-01-17 12:00:01.765125878 +0000 UTC m=+26.695647650" observedRunningTime="2025-01-17 12:00:02.276621753 +0000 UTC m=+27.207143565" watchObservedRunningTime="2025-01-17 12:00:02.277119258 +0000 UTC m=+27.207641030" Jan 17 12:00:03.260575 kubelet[2465]: I0117 12:00:03.260532 2465 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 17 12:00:03.269069 kubelet[2465]: E0117 12:00:03.269027 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:00:07.777063 systemd[1]: Started sshd@7-10.0.0.32:22-10.0.0.1:37744.service - OpenSSH per-connection server daemon (10.0.0.1:37744). Jan 17 12:00:07.818363 sshd[3816]: Accepted publickey for core from 10.0.0.1 port 37744 ssh2: RSA SHA256:xsFjL0Ru499iNfhRyIcjP6wTIWZ5oE8f5Pm6hYv+KHo Jan 17 12:00:07.819730 sshd[3816]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:00:07.823444 systemd-logind[1424]: New session 8 of user core. Jan 17 12:00:07.836331 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 17 12:00:07.988748 sshd[3816]: pam_unix(sshd:session): session closed for user core Jan 17 12:00:07.991871 systemd[1]: sshd@7-10.0.0.32:22-10.0.0.1:37744.service: Deactivated successfully. Jan 17 12:00:07.993589 systemd[1]: session-8.scope: Deactivated successfully. Jan 17 12:00:07.994200 systemd-logind[1424]: Session 8 logged out. Waiting for processes to exit. Jan 17 12:00:07.995135 systemd-logind[1424]: Removed session 8. Jan 17 12:00:09.141351 containerd[1442]: time="2025-01-17T12:00:09.141018784Z" level=info msg="StopPodSandbox for \"e8269b6e1d7b9d8eda00e9aff3d042ffa04d54a55d72c2ff69881df4d2a9b02f\"" Jan 17 12:00:09.326777 containerd[1442]: 2025-01-17 12:00:09.244 [INFO][3871] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="e8269b6e1d7b9d8eda00e9aff3d042ffa04d54a55d72c2ff69881df4d2a9b02f" Jan 17 12:00:09.326777 containerd[1442]: 2025-01-17 12:00:09.246 [INFO][3871] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="e8269b6e1d7b9d8eda00e9aff3d042ffa04d54a55d72c2ff69881df4d2a9b02f" iface="eth0" netns="/var/run/netns/cni-dbca8819-a327-aada-0244-ac890b631e25" Jan 17 12:00:09.326777 containerd[1442]: 2025-01-17 12:00:09.247 [INFO][3871] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="e8269b6e1d7b9d8eda00e9aff3d042ffa04d54a55d72c2ff69881df4d2a9b02f" iface="eth0" netns="/var/run/netns/cni-dbca8819-a327-aada-0244-ac890b631e25" Jan 17 12:00:09.326777 containerd[1442]: 2025-01-17 12:00:09.251 [INFO][3871] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="e8269b6e1d7b9d8eda00e9aff3d042ffa04d54a55d72c2ff69881df4d2a9b02f" iface="eth0" netns="/var/run/netns/cni-dbca8819-a327-aada-0244-ac890b631e25" Jan 17 12:00:09.326777 containerd[1442]: 2025-01-17 12:00:09.251 [INFO][3871] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e8269b6e1d7b9d8eda00e9aff3d042ffa04d54a55d72c2ff69881df4d2a9b02f" Jan 17 12:00:09.326777 containerd[1442]: 2025-01-17 12:00:09.251 [INFO][3871] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e8269b6e1d7b9d8eda00e9aff3d042ffa04d54a55d72c2ff69881df4d2a9b02f" Jan 17 12:00:09.326777 containerd[1442]: 2025-01-17 12:00:09.313 [INFO][3879] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e8269b6e1d7b9d8eda00e9aff3d042ffa04d54a55d72c2ff69881df4d2a9b02f" HandleID="k8s-pod-network.e8269b6e1d7b9d8eda00e9aff3d042ffa04d54a55d72c2ff69881df4d2a9b02f" Workload="localhost-k8s-coredns--6f6b679f8f--2qxc4-eth0" Jan 17 12:00:09.326777 containerd[1442]: 2025-01-17 12:00:09.313 [INFO][3879] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:00:09.326777 containerd[1442]: 2025-01-17 12:00:09.313 [INFO][3879] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:00:09.326777 containerd[1442]: 2025-01-17 12:00:09.322 [WARNING][3879] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="e8269b6e1d7b9d8eda00e9aff3d042ffa04d54a55d72c2ff69881df4d2a9b02f" HandleID="k8s-pod-network.e8269b6e1d7b9d8eda00e9aff3d042ffa04d54a55d72c2ff69881df4d2a9b02f" Workload="localhost-k8s-coredns--6f6b679f8f--2qxc4-eth0" Jan 17 12:00:09.326777 containerd[1442]: 2025-01-17 12:00:09.322 [INFO][3879] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e8269b6e1d7b9d8eda00e9aff3d042ffa04d54a55d72c2ff69881df4d2a9b02f" HandleID="k8s-pod-network.e8269b6e1d7b9d8eda00e9aff3d042ffa04d54a55d72c2ff69881df4d2a9b02f" Workload="localhost-k8s-coredns--6f6b679f8f--2qxc4-eth0" Jan 17 12:00:09.326777 containerd[1442]: 2025-01-17 12:00:09.323 [INFO][3879] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:00:09.326777 containerd[1442]: 2025-01-17 12:00:09.325 [INFO][3871] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="e8269b6e1d7b9d8eda00e9aff3d042ffa04d54a55d72c2ff69881df4d2a9b02f" Jan 17 12:00:09.326777 containerd[1442]: time="2025-01-17T12:00:09.331927535Z" level=info msg="TearDown network for sandbox \"e8269b6e1d7b9d8eda00e9aff3d042ffa04d54a55d72c2ff69881df4d2a9b02f\" successfully" Jan 17 12:00:09.326777 containerd[1442]: time="2025-01-17T12:00:09.331994990Z" level=info msg="StopPodSandbox for \"e8269b6e1d7b9d8eda00e9aff3d042ffa04d54a55d72c2ff69881df4d2a9b02f\" returns successfully" Jan 17 12:00:09.337830 containerd[1442]: time="2025-01-17T12:00:09.333817722Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-2qxc4,Uid:9990c74b-816e-4bf1-9470-a9d91243af45,Namespace:kube-system,Attempt:1,}" Jan 17 12:00:09.337856 kubelet[2465]: E0117 12:00:09.332354 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:00:09.342507 systemd[1]: run-netns-cni\x2ddbca8819\x2da327\x2daada\x2d0244\x2dac890b631e25.mount: Deactivated successfully. Jan 17 12:00:09.456788 systemd-networkd[1382]: cali9d73cbdcb43: Link UP Jan 17 12:00:09.457413 systemd-networkd[1382]: cali9d73cbdcb43: Gained carrier Jan 17 12:00:09.477495 containerd[1442]: 2025-01-17 12:00:09.367 [INFO][3889] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 17 12:00:09.477495 containerd[1442]: 2025-01-17 12:00:09.380 [INFO][3889] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--6f6b679f8f--2qxc4-eth0 coredns-6f6b679f8f- kube-system 9990c74b-816e-4bf1-9470-a9d91243af45 789 0 2025-01-17 11:59:41 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-6f6b679f8f-2qxc4 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali9d73cbdcb43 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="ab4fbb05977a0b08161fa487c537cf03dd4de5e556f1c6a97a9dca94b6c42f7f" Namespace="kube-system" Pod="coredns-6f6b679f8f-2qxc4" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--2qxc4-" Jan 17 12:00:09.477495 containerd[1442]: 2025-01-17 12:00:09.380 [INFO][3889] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="ab4fbb05977a0b08161fa487c537cf03dd4de5e556f1c6a97a9dca94b6c42f7f" Namespace="kube-system" Pod="coredns-6f6b679f8f-2qxc4" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--2qxc4-eth0" Jan 17 12:00:09.477495 containerd[1442]: 2025-01-17 12:00:09.407 [INFO][3902] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ab4fbb05977a0b08161fa487c537cf03dd4de5e556f1c6a97a9dca94b6c42f7f" HandleID="k8s-pod-network.ab4fbb05977a0b08161fa487c537cf03dd4de5e556f1c6a97a9dca94b6c42f7f" Workload="localhost-k8s-coredns--6f6b679f8f--2qxc4-eth0" Jan 17 12:00:09.477495 containerd[1442]: 2025-01-17 12:00:09.418 [INFO][3902] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ab4fbb05977a0b08161fa487c537cf03dd4de5e556f1c6a97a9dca94b6c42f7f" HandleID="k8s-pod-network.ab4fbb05977a0b08161fa487c537cf03dd4de5e556f1c6a97a9dca94b6c42f7f" Workload="localhost-k8s-coredns--6f6b679f8f--2qxc4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400027b910), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-6f6b679f8f-2qxc4", 
"timestamp":"2025-01-17 12:00:09.407339324 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 12:00:09.477495 containerd[1442]: 2025-01-17 12:00:09.418 [INFO][3902] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:00:09.477495 containerd[1442]: 2025-01-17 12:00:09.418 [INFO][3902] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:00:09.477495 containerd[1442]: 2025-01-17 12:00:09.418 [INFO][3902] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 17 12:00:09.477495 containerd[1442]: 2025-01-17 12:00:09.421 [INFO][3902] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.ab4fbb05977a0b08161fa487c537cf03dd4de5e556f1c6a97a9dca94b6c42f7f" host="localhost" Jan 17 12:00:09.477495 containerd[1442]: 2025-01-17 12:00:09.426 [INFO][3902] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 17 12:00:09.477495 containerd[1442]: 2025-01-17 12:00:09.429 [INFO][3902] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 17 12:00:09.477495 containerd[1442]: 2025-01-17 12:00:09.431 [INFO][3902] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 17 12:00:09.477495 containerd[1442]: 2025-01-17 12:00:09.433 [INFO][3902] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 17 12:00:09.477495 containerd[1442]: 2025-01-17 12:00:09.433 [INFO][3902] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ab4fbb05977a0b08161fa487c537cf03dd4de5e556f1c6a97a9dca94b6c42f7f" host="localhost" Jan 17 12:00:09.477495 containerd[1442]: 2025-01-17 12:00:09.436 [INFO][3902] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.ab4fbb05977a0b08161fa487c537cf03dd4de5e556f1c6a97a9dca94b6c42f7f Jan 17 12:00:09.477495 containerd[1442]: 2025-01-17 12:00:09.440 [INFO][3902] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ab4fbb05977a0b08161fa487c537cf03dd4de5e556f1c6a97a9dca94b6c42f7f" host="localhost" Jan 17 12:00:09.477495 containerd[1442]: 2025-01-17 12:00:09.444 [INFO][3902] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.ab4fbb05977a0b08161fa487c537cf03dd4de5e556f1c6a97a9dca94b6c42f7f" host="localhost" Jan 17 12:00:09.477495 containerd[1442]: 2025-01-17 12:00:09.444 [INFO][3902] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.ab4fbb05977a0b08161fa487c537cf03dd4de5e556f1c6a97a9dca94b6c42f7f" host="localhost" Jan 17 12:00:09.477495 containerd[1442]: 2025-01-17 12:00:09.444 [INFO][3902] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 17 12:00:09.477495 containerd[1442]: 2025-01-17 12:00:09.444 [INFO][3902] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="ab4fbb05977a0b08161fa487c537cf03dd4de5e556f1c6a97a9dca94b6c42f7f" HandleID="k8s-pod-network.ab4fbb05977a0b08161fa487c537cf03dd4de5e556f1c6a97a9dca94b6c42f7f" Workload="localhost-k8s-coredns--6f6b679f8f--2qxc4-eth0" Jan 17 12:00:09.478018 containerd[1442]: 2025-01-17 12:00:09.446 [INFO][3889] cni-plugin/k8s.go 386: Populated endpoint ContainerID="ab4fbb05977a0b08161fa487c537cf03dd4de5e556f1c6a97a9dca94b6c42f7f" Namespace="kube-system" Pod="coredns-6f6b679f8f-2qxc4" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--2qxc4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--2qxc4-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"9990c74b-816e-4bf1-9470-a9d91243af45", ResourceVersion:"789", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 11, 59, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-6f6b679f8f-2qxc4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9d73cbdcb43", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:00:09.478018 containerd[1442]: 2025-01-17 12:00:09.446 [INFO][3889] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="ab4fbb05977a0b08161fa487c537cf03dd4de5e556f1c6a97a9dca94b6c42f7f" Namespace="kube-system" Pod="coredns-6f6b679f8f-2qxc4" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--2qxc4-eth0" Jan 17 12:00:09.478018 containerd[1442]: 2025-01-17 12:00:09.446 [INFO][3889] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9d73cbdcb43 ContainerID="ab4fbb05977a0b08161fa487c537cf03dd4de5e556f1c6a97a9dca94b6c42f7f" Namespace="kube-system" Pod="coredns-6f6b679f8f-2qxc4" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--2qxc4-eth0" Jan 17 12:00:09.478018 containerd[1442]: 2025-01-17 12:00:09.458 [INFO][3889] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ab4fbb05977a0b08161fa487c537cf03dd4de5e556f1c6a97a9dca94b6c42f7f" Namespace="kube-system" Pod="coredns-6f6b679f8f-2qxc4" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--2qxc4-eth0" Jan 17 12:00:09.478018 containerd[1442]: 2025-01-17 12:00:09.461 
[INFO][3889] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="ab4fbb05977a0b08161fa487c537cf03dd4de5e556f1c6a97a9dca94b6c42f7f" Namespace="kube-system" Pod="coredns-6f6b679f8f-2qxc4" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--2qxc4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--2qxc4-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"9990c74b-816e-4bf1-9470-a9d91243af45", ResourceVersion:"789", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 11, 59, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ab4fbb05977a0b08161fa487c537cf03dd4de5e556f1c6a97a9dca94b6c42f7f", Pod:"coredns-6f6b679f8f-2qxc4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9d73cbdcb43", MAC:"56:bd:ae:1f:18:b3", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:00:09.478018 containerd[1442]: 2025-01-17 12:00:09.475 [INFO][3889] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="ab4fbb05977a0b08161fa487c537cf03dd4de5e556f1c6a97a9dca94b6c42f7f" Namespace="kube-system" Pod="coredns-6f6b679f8f-2qxc4" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--2qxc4-eth0" Jan 17 12:00:09.531750 containerd[1442]: time="2025-01-17T12:00:09.531261388Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:00:09.531750 containerd[1442]: time="2025-01-17T12:00:09.531332644Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:00:09.531750 containerd[1442]: time="2025-01-17T12:00:09.531344407Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:00:09.531750 containerd[1442]: time="2025-01-17T12:00:09.531431906Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:00:09.559358 systemd[1]: Started cri-containerd-ab4fbb05977a0b08161fa487c537cf03dd4de5e556f1c6a97a9dca94b6c42f7f.scope - libcontainer container ab4fbb05977a0b08161fa487c537cf03dd4de5e556f1c6a97a9dca94b6c42f7f. 
Jan 17 12:00:09.569116 systemd-resolved[1312]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 17 12:00:09.584492 containerd[1442]: time="2025-01-17T12:00:09.584455720Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-2qxc4,Uid:9990c74b-816e-4bf1-9470-a9d91243af45,Namespace:kube-system,Attempt:1,} returns sandbox id \"ab4fbb05977a0b08161fa487c537cf03dd4de5e556f1c6a97a9dca94b6c42f7f\"" Jan 17 12:00:09.585474 kubelet[2465]: E0117 12:00:09.585251 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:00:09.588051 containerd[1442]: time="2025-01-17T12:00:09.587998120Z" level=info msg="CreateContainer within sandbox \"ab4fbb05977a0b08161fa487c537cf03dd4de5e556f1c6a97a9dca94b6c42f7f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 17 12:00:09.601121 containerd[1442]: time="2025-01-17T12:00:09.601085155Z" level=info msg="CreateContainer within sandbox \"ab4fbb05977a0b08161fa487c537cf03dd4de5e556f1c6a97a9dca94b6c42f7f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1aef62082a994317d12c4687176a786a55ee5ed816271e921a2c7a283627da51\"" Jan 17 12:00:09.602513 containerd[1442]: time="2025-01-17T12:00:09.601780832Z" level=info msg="StartContainer for \"1aef62082a994317d12c4687176a786a55ee5ed816271e921a2c7a283627da51\"" Jan 17 12:00:09.628354 systemd[1]: Started cri-containerd-1aef62082a994317d12c4687176a786a55ee5ed816271e921a2c7a283627da51.scope - libcontainer container 1aef62082a994317d12c4687176a786a55ee5ed816271e921a2c7a283627da51. Jan 17 12:00:09.659678 containerd[1442]: time="2025-01-17T12:00:09.659620254Z" level=info msg="StartContainer for \"1aef62082a994317d12c4687176a786a55ee5ed816271e921a2c7a283627da51\" returns successfully" Jan 17 12:00:10.140338 containerd[1442]: time="2025-01-17T12:00:10.140222282Z" level=info msg="StopPodSandbox for \"dd2afa2329efa91b2b18827f7bf6a54096f578825faefd31139f31287f5dc7c0\"" Jan 17 12:00:10.140338 containerd[1442]: time="2025-01-17T12:00:10.140227403Z" level=info msg="StopPodSandbox for \"1a6f8de862b36904f696733422ad5d8c23138bccd11e7816eb827af270530096\"" Jan 17 12:00:10.240297 containerd[1442]: 2025-01-17 12:00:10.196 [INFO][4062] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="dd2afa2329efa91b2b18827f7bf6a54096f578825faefd31139f31287f5dc7c0" Jan 17 12:00:10.240297 containerd[1442]: 2025-01-17 12:00:10.196 [INFO][4062] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="dd2afa2329efa91b2b18827f7bf6a54096f578825faefd31139f31287f5dc7c0" iface="eth0" netns="/var/run/netns/cni-490b0641-44eb-c652-2797-85fcbc6ef775" Jan 17 12:00:10.240297 containerd[1442]: 2025-01-17 12:00:10.196 [INFO][4062] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="dd2afa2329efa91b2b18827f7bf6a54096f578825faefd31139f31287f5dc7c0" iface="eth0" netns="/var/run/netns/cni-490b0641-44eb-c652-2797-85fcbc6ef775" Jan 17 12:00:10.240297 containerd[1442]: 2025-01-17 12:00:10.197 [INFO][4062] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="dd2afa2329efa91b2b18827f7bf6a54096f578825faefd31139f31287f5dc7c0" iface="eth0" netns="/var/run/netns/cni-490b0641-44eb-c652-2797-85fcbc6ef775" Jan 17 12:00:10.240297 containerd[1442]: 2025-01-17 12:00:10.197 [INFO][4062] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="dd2afa2329efa91b2b18827f7bf6a54096f578825faefd31139f31287f5dc7c0" Jan 17 12:00:10.240297 containerd[1442]: 2025-01-17 12:00:10.197 [INFO][4062] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="dd2afa2329efa91b2b18827f7bf6a54096f578825faefd31139f31287f5dc7c0" Jan 17 12:00:10.240297 containerd[1442]: 2025-01-17 12:00:10.227 [INFO][4077] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="dd2afa2329efa91b2b18827f7bf6a54096f578825faefd31139f31287f5dc7c0" HandleID="k8s-pod-network.dd2afa2329efa91b2b18827f7bf6a54096f578825faefd31139f31287f5dc7c0" Workload="localhost-k8s-calico--apiserver--7d954f8cd6--tmq7s-eth0" Jan 17 12:00:10.240297 containerd[1442]: 2025-01-17 12:00:10.227 [INFO][4077] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:00:10.240297 containerd[1442]: 2025-01-17 12:00:10.227 [INFO][4077] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:00:10.240297 containerd[1442]: 2025-01-17 12:00:10.235 [WARNING][4077] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="dd2afa2329efa91b2b18827f7bf6a54096f578825faefd31139f31287f5dc7c0" HandleID="k8s-pod-network.dd2afa2329efa91b2b18827f7bf6a54096f578825faefd31139f31287f5dc7c0" Workload="localhost-k8s-calico--apiserver--7d954f8cd6--tmq7s-eth0" Jan 17 12:00:10.240297 containerd[1442]: 2025-01-17 12:00:10.235 [INFO][4077] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="dd2afa2329efa91b2b18827f7bf6a54096f578825faefd31139f31287f5dc7c0" HandleID="k8s-pod-network.dd2afa2329efa91b2b18827f7bf6a54096f578825faefd31139f31287f5dc7c0" Workload="localhost-k8s-calico--apiserver--7d954f8cd6--tmq7s-eth0" Jan 17 12:00:10.240297 containerd[1442]: 2025-01-17 12:00:10.236 [INFO][4077] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:00:10.240297 containerd[1442]: 2025-01-17 12:00:10.237 [INFO][4062] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="dd2afa2329efa91b2b18827f7bf6a54096f578825faefd31139f31287f5dc7c0" Jan 17 12:00:10.241673 containerd[1442]: time="2025-01-17T12:00:10.240417133Z" level=info msg="TearDown network for sandbox \"dd2afa2329efa91b2b18827f7bf6a54096f578825faefd31139f31287f5dc7c0\" successfully" Jan 17 12:00:10.241673 containerd[1442]: time="2025-01-17T12:00:10.240443499Z" level=info msg="StopPodSandbox for \"dd2afa2329efa91b2b18827f7bf6a54096f578825faefd31139f31287f5dc7c0\" returns successfully" Jan 17 12:00:10.241673 containerd[1442]: time="2025-01-17T12:00:10.241082559Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d954f8cd6-tmq7s,Uid:8e3c1cb9-32d1-4d37-bc9c-8ecf060d43d5,Namespace:calico-apiserver,Attempt:1,}" Jan 17 12:00:10.250016 containerd[1442]: 2025-01-17 12:00:10.193 [INFO][4061] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="1a6f8de862b36904f696733422ad5d8c23138bccd11e7816eb827af270530096" Jan 17 12:00:10.250016 containerd[1442]: 2025-01-17 12:00:10.194 [INFO][4061] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="1a6f8de862b36904f696733422ad5d8c23138bccd11e7816eb827af270530096" iface="eth0" netns="/var/run/netns/cni-abe52c4e-46ce-e1df-cea0-2e371445004d" Jan 17 12:00:10.250016 containerd[1442]: 2025-01-17 12:00:10.195 [INFO][4061] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="1a6f8de862b36904f696733422ad5d8c23138bccd11e7816eb827af270530096" iface="eth0" netns="/var/run/netns/cni-abe52c4e-46ce-e1df-cea0-2e371445004d" Jan 17 12:00:10.250016 containerd[1442]: 2025-01-17 12:00:10.197 [INFO][4061] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="1a6f8de862b36904f696733422ad5d8c23138bccd11e7816eb827af270530096" iface="eth0" netns="/var/run/netns/cni-abe52c4e-46ce-e1df-cea0-2e371445004d" Jan 17 12:00:10.250016 containerd[1442]: 2025-01-17 12:00:10.197 [INFO][4061] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="1a6f8de862b36904f696733422ad5d8c23138bccd11e7816eb827af270530096" Jan 17 12:00:10.250016 containerd[1442]: 2025-01-17 12:00:10.197 [INFO][4061] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1a6f8de862b36904f696733422ad5d8c23138bccd11e7816eb827af270530096" Jan 17 12:00:10.250016 containerd[1442]: 2025-01-17 12:00:10.227 [INFO][4078] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1a6f8de862b36904f696733422ad5d8c23138bccd11e7816eb827af270530096" HandleID="k8s-pod-network.1a6f8de862b36904f696733422ad5d8c23138bccd11e7816eb827af270530096" Workload="localhost-k8s-calico--apiserver--7d954f8cd6--mxjf9-eth0" Jan 17 12:00:10.250016 containerd[1442]: 2025-01-17 12:00:10.227 [INFO][4078] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:00:10.250016 containerd[1442]: 2025-01-17 12:00:10.236 [INFO][4078] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:00:10.250016 containerd[1442]: 2025-01-17 12:00:10.245 [WARNING][4078] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="1a6f8de862b36904f696733422ad5d8c23138bccd11e7816eb827af270530096" HandleID="k8s-pod-network.1a6f8de862b36904f696733422ad5d8c23138bccd11e7816eb827af270530096" Workload="localhost-k8s-calico--apiserver--7d954f8cd6--mxjf9-eth0" Jan 17 12:00:10.250016 containerd[1442]: 2025-01-17 12:00:10.245 [INFO][4078] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1a6f8de862b36904f696733422ad5d8c23138bccd11e7816eb827af270530096" HandleID="k8s-pod-network.1a6f8de862b36904f696733422ad5d8c23138bccd11e7816eb827af270530096" Workload="localhost-k8s-calico--apiserver--7d954f8cd6--mxjf9-eth0" Jan 17 12:00:10.250016 containerd[1442]: 2025-01-17 12:00:10.247 [INFO][4078] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:00:10.250016 containerd[1442]: 2025-01-17 12:00:10.248 [INFO][4061] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="1a6f8de862b36904f696733422ad5d8c23138bccd11e7816eb827af270530096" Jan 17 12:00:10.250467 containerd[1442]: time="2025-01-17T12:00:10.250134016Z" level=info msg="TearDown network for sandbox \"1a6f8de862b36904f696733422ad5d8c23138bccd11e7816eb827af270530096\" successfully" Jan 17 12:00:10.250467 containerd[1442]: time="2025-01-17T12:00:10.250156701Z" level=info msg="StopPodSandbox for \"1a6f8de862b36904f696733422ad5d8c23138bccd11e7816eb827af270530096\" returns successfully" Jan 17 12:00:10.250826 containerd[1442]: time="2025-01-17T12:00:10.250794681Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d954f8cd6-mxjf9,Uid:6c20dd64-894e-4ff1-b1cd-c8495df31316,Namespace:calico-apiserver,Attempt:1,}" Jan 17 12:00:10.287478 kubelet[2465]: E0117 12:00:10.287443 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:00:10.302785 kubelet[2465]: I0117 12:00:10.301707 2465 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-2qxc4" podStartSLOduration=29.301658152999998 podStartE2EDuration="29.301658153s" podCreationTimestamp="2025-01-17 11:59:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:00:10.30017787 +0000 UTC m=+35.230699802" watchObservedRunningTime="2025-01-17 12:00:10.301658153 +0000 UTC m=+35.232179965" Jan 17 12:00:10.337366 systemd[1]: run-netns-cni\x2d490b0641\x2d44eb\x2dc652\x2d2797\x2d85fcbc6ef775.mount: Deactivated successfully. Jan 17 12:00:10.337696 systemd[1]: run-netns-cni\x2dabe52c4e\x2d46ce\x2de1df\x2dcea0\x2d2e371445004d.mount: Deactivated successfully. 
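
Annotation: the StopPodSandbox sequence above is the reverse of the setup path: the CNI DEL enters the netns (the veth is already gone), then IPAM releases the address keyed by the handle ID, which here logs a WARNING and is a no-op because the address was never allocated under that handle, and finally systemd cleans up the run-netns mount units. A rough Go sketch of the release call against libcalico-go's IPAM client follows; the import path and client construction are assumptions, only the "k8s-pod-network.<containerID>" handle convention comes from the ipam_plugin.go lines above.

    package main

    import (
        "context"
        "log"

        client "github.com/projectcalico/calico/libcalico-go/lib/clientv3" // assumed monorepo import path
    )

    func main() {
        // NewFromEnv reads DATASTORE_TYPE, KUBECONFIG, etc. from the environment.
        c, err := client.NewFromEnv()
        if err != nil {
            log.Fatal(err)
        }

        // Handle IDs follow "k8s-pod-network.<containerID>", as in the HandleID fields logged above.
        handle := "k8s-pod-network.dd2afa2329efa91b2b18827f7bf6a54096f578825faefd31139f31287f5dc7c0"

        // Releasing by handle frees every address allocated under that handle; if none exist
        // (the WARNING case in the log), there is simply nothing to free.
        if err := c.IPAM().ReleaseByHandle(context.Background(), handle); err != nil {
            log.Printf("release failed: %v", err)
        }
    }
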
Jan 17 12:00:10.421292 systemd-networkd[1382]: cali0f95db42209: Link UP Jan 17 12:00:10.421493 systemd-networkd[1382]: cali0f95db42209: Gained carrier Jan 17 12:00:10.432353 containerd[1442]: 2025-01-17 12:00:10.323 [INFO][4099] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 17 12:00:10.432353 containerd[1442]: 2025-01-17 12:00:10.341 [INFO][4099] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7d954f8cd6--mxjf9-eth0 calico-apiserver-7d954f8cd6- calico-apiserver 6c20dd64-894e-4ff1-b1cd-c8495df31316 808 0 2025-01-17 11:59:48 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7d954f8cd6 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7d954f8cd6-mxjf9 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali0f95db42209 [] []}} ContainerID="f3ccaa38190acb23cb1e3a54053033cd4ea0640c0cf99f6d60508106e6805d4f" Namespace="calico-apiserver" Pod="calico-apiserver-7d954f8cd6-mxjf9" WorkloadEndpoint="localhost-k8s-calico--apiserver--7d954f8cd6--mxjf9-" Jan 17 12:00:10.432353 containerd[1442]: 2025-01-17 12:00:10.342 [INFO][4099] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="f3ccaa38190acb23cb1e3a54053033cd4ea0640c0cf99f6d60508106e6805d4f" Namespace="calico-apiserver" Pod="calico-apiserver-7d954f8cd6-mxjf9" WorkloadEndpoint="localhost-k8s-calico--apiserver--7d954f8cd6--mxjf9-eth0" Jan 17 12:00:10.432353 containerd[1442]: 2025-01-17 12:00:10.371 [INFO][4125] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f3ccaa38190acb23cb1e3a54053033cd4ea0640c0cf99f6d60508106e6805d4f" HandleID="k8s-pod-network.f3ccaa38190acb23cb1e3a54053033cd4ea0640c0cf99f6d60508106e6805d4f" Workload="localhost-k8s-calico--apiserver--7d954f8cd6--mxjf9-eth0" Jan 17 12:00:10.432353 containerd[1442]: 2025-01-17 12:00:10.382 [INFO][4125] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f3ccaa38190acb23cb1e3a54053033cd4ea0640c0cf99f6d60508106e6805d4f" HandleID="k8s-pod-network.f3ccaa38190acb23cb1e3a54053033cd4ea0640c0cf99f6d60508106e6805d4f" Workload="localhost-k8s-calico--apiserver--7d954f8cd6--mxjf9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400027a0b0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-7d954f8cd6-mxjf9", "timestamp":"2025-01-17 12:00:10.37130177 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 12:00:10.432353 containerd[1442]: 2025-01-17 12:00:10.383 [INFO][4125] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:00:10.432353 containerd[1442]: 2025-01-17 12:00:10.383 [INFO][4125] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 17 12:00:10.432353 containerd[1442]: 2025-01-17 12:00:10.383 [INFO][4125] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 17 12:00:10.432353 containerd[1442]: 2025-01-17 12:00:10.389 [INFO][4125] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.f3ccaa38190acb23cb1e3a54053033cd4ea0640c0cf99f6d60508106e6805d4f" host="localhost" Jan 17 12:00:10.432353 containerd[1442]: 2025-01-17 12:00:10.393 [INFO][4125] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 17 12:00:10.432353 containerd[1442]: 2025-01-17 12:00:10.397 [INFO][4125] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 17 12:00:10.432353 containerd[1442]: 2025-01-17 12:00:10.404 [INFO][4125] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 17 12:00:10.432353 containerd[1442]: 2025-01-17 12:00:10.407 [INFO][4125] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 17 12:00:10.432353 containerd[1442]: 2025-01-17 12:00:10.407 [INFO][4125] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f3ccaa38190acb23cb1e3a54053033cd4ea0640c0cf99f6d60508106e6805d4f" host="localhost" Jan 17 12:00:10.432353 containerd[1442]: 2025-01-17 12:00:10.408 [INFO][4125] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.f3ccaa38190acb23cb1e3a54053033cd4ea0640c0cf99f6d60508106e6805d4f Jan 17 12:00:10.432353 containerd[1442]: 2025-01-17 12:00:10.412 [INFO][4125] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f3ccaa38190acb23cb1e3a54053033cd4ea0640c0cf99f6d60508106e6805d4f" host="localhost" Jan 17 12:00:10.432353 containerd[1442]: 2025-01-17 12:00:10.416 [INFO][4125] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.f3ccaa38190acb23cb1e3a54053033cd4ea0640c0cf99f6d60508106e6805d4f" host="localhost" Jan 17 12:00:10.432353 containerd[1442]: 2025-01-17 12:00:10.416 [INFO][4125] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.f3ccaa38190acb23cb1e3a54053033cd4ea0640c0cf99f6d60508106e6805d4f" host="localhost" Jan 17 12:00:10.432353 containerd[1442]: 2025-01-17 12:00:10.416 [INFO][4125] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 17 12:00:10.432353 containerd[1442]: 2025-01-17 12:00:10.416 [INFO][4125] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="f3ccaa38190acb23cb1e3a54053033cd4ea0640c0cf99f6d60508106e6805d4f" HandleID="k8s-pod-network.f3ccaa38190acb23cb1e3a54053033cd4ea0640c0cf99f6d60508106e6805d4f" Workload="localhost-k8s-calico--apiserver--7d954f8cd6--mxjf9-eth0" Jan 17 12:00:10.432918 containerd[1442]: 2025-01-17 12:00:10.419 [INFO][4099] cni-plugin/k8s.go 386: Populated endpoint ContainerID="f3ccaa38190acb23cb1e3a54053033cd4ea0640c0cf99f6d60508106e6805d4f" Namespace="calico-apiserver" Pod="calico-apiserver-7d954f8cd6-mxjf9" WorkloadEndpoint="localhost-k8s-calico--apiserver--7d954f8cd6--mxjf9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7d954f8cd6--mxjf9-eth0", GenerateName:"calico-apiserver-7d954f8cd6-", Namespace:"calico-apiserver", SelfLink:"", UID:"6c20dd64-894e-4ff1-b1cd-c8495df31316", ResourceVersion:"808", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 11, 59, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7d954f8cd6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7d954f8cd6-mxjf9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0f95db42209", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:00:10.432918 containerd[1442]: 2025-01-17 12:00:10.419 [INFO][4099] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="f3ccaa38190acb23cb1e3a54053033cd4ea0640c0cf99f6d60508106e6805d4f" Namespace="calico-apiserver" Pod="calico-apiserver-7d954f8cd6-mxjf9" WorkloadEndpoint="localhost-k8s-calico--apiserver--7d954f8cd6--mxjf9-eth0" Jan 17 12:00:10.432918 containerd[1442]: 2025-01-17 12:00:10.419 [INFO][4099] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0f95db42209 ContainerID="f3ccaa38190acb23cb1e3a54053033cd4ea0640c0cf99f6d60508106e6805d4f" Namespace="calico-apiserver" Pod="calico-apiserver-7d954f8cd6-mxjf9" WorkloadEndpoint="localhost-k8s-calico--apiserver--7d954f8cd6--mxjf9-eth0" Jan 17 12:00:10.432918 containerd[1442]: 2025-01-17 12:00:10.421 [INFO][4099] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f3ccaa38190acb23cb1e3a54053033cd4ea0640c0cf99f6d60508106e6805d4f" Namespace="calico-apiserver" Pod="calico-apiserver-7d954f8cd6-mxjf9" WorkloadEndpoint="localhost-k8s-calico--apiserver--7d954f8cd6--mxjf9-eth0" Jan 17 12:00:10.432918 containerd[1442]: 2025-01-17 12:00:10.421 [INFO][4099] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="f3ccaa38190acb23cb1e3a54053033cd4ea0640c0cf99f6d60508106e6805d4f" Namespace="calico-apiserver" Pod="calico-apiserver-7d954f8cd6-mxjf9" WorkloadEndpoint="localhost-k8s-calico--apiserver--7d954f8cd6--mxjf9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7d954f8cd6--mxjf9-eth0", GenerateName:"calico-apiserver-7d954f8cd6-", Namespace:"calico-apiserver", SelfLink:"", UID:"6c20dd64-894e-4ff1-b1cd-c8495df31316", ResourceVersion:"808", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 11, 59, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7d954f8cd6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f3ccaa38190acb23cb1e3a54053033cd4ea0640c0cf99f6d60508106e6805d4f", Pod:"calico-apiserver-7d954f8cd6-mxjf9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0f95db42209", MAC:"3e:78:04:d4:82:13", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:00:10.432918 containerd[1442]: 2025-01-17 12:00:10.430 [INFO][4099] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="f3ccaa38190acb23cb1e3a54053033cd4ea0640c0cf99f6d60508106e6805d4f" Namespace="calico-apiserver" Pod="calico-apiserver-7d954f8cd6-mxjf9" WorkloadEndpoint="localhost-k8s-calico--apiserver--7d954f8cd6--mxjf9-eth0" Jan 17 12:00:10.448578 containerd[1442]: time="2025-01-17T12:00:10.448299792Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:00:10.448578 containerd[1442]: time="2025-01-17T12:00:10.448360486Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:00:10.448578 containerd[1442]: time="2025-01-17T12:00:10.448386371Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:00:10.448578 containerd[1442]: time="2025-01-17T12:00:10.448476951Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:00:10.472375 systemd[1]: Started cri-containerd-f3ccaa38190acb23cb1e3a54053033cd4ea0640c0cf99f6d60508106e6805d4f.scope - libcontainer container f3ccaa38190acb23cb1e3a54053033cd4ea0640c0cf99f6d60508106e6805d4f. 
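
Annotation: every sandbox in this log gets its address through the same IPAM sequence: acquire the host-wide lock, look up the host's block affinity, load the 192.168.88.128/26 block, claim one address, write the block back, release the lock. The request is exactly the AutoAssignArgs structure dumped in the ipam_plugin.go 265 lines above. A compact Go sketch of that call, assuming a recent libcalico-go (older releases return plain IPNet slices rather than an assignments struct); the argument fields and handle ID are taken from the log.

    package main

    import (
        "context"
        "fmt"
        "log"

        client "github.com/projectcalico/calico/libcalico-go/lib/clientv3" // assumed monorepo import path
        "github.com/projectcalico/calico/libcalico-go/lib/ipam"
    )

    func main() {
        c, err := client.NewFromEnv()
        if err != nil {
            log.Fatal(err)
        }

        handle := "k8s-pod-network.f3ccaa38190acb23cb1e3a54053033cd4ea0640c0cf99f6d60508106e6805d4f"
        args := ipam.AutoAssignArgs{
            Num4:     1, // one IPv4 address, no IPv6 — matches "Auto-assign 1 ipv4, 0 ipv6 addrs"
            Num6:     0,
            HandleID: &handle,
            Hostname: "localhost",
            Attrs: map[string]string{
                "namespace": "calico-apiserver",
                "node":      "localhost",
                "pod":       "calico-apiserver-7d954f8cd6-mxjf9",
            },
        }

        // AutoAssign takes the host-wide IPAM lock, confirms the block affinity for this host
        // and claims the requested number of addresses from that block.
        v4, _, err := c.IPAM().AutoAssign(context.Background(), args)
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("assigned:", v4.IPs) // e.g. [192.168.88.130/26], as in the log above
    }
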
Jan 17 12:00:10.482658 systemd-resolved[1312]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 17 12:00:10.502414 containerd[1442]: time="2025-01-17T12:00:10.502371446Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d954f8cd6-mxjf9,Uid:6c20dd64-894e-4ff1-b1cd-c8495df31316,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"f3ccaa38190acb23cb1e3a54053033cd4ea0640c0cf99f6d60508106e6805d4f\"" Jan 17 12:00:10.521348 systemd-networkd[1382]: calia37daebb0c4: Link UP Jan 17 12:00:10.522902 systemd-networkd[1382]: calia37daebb0c4: Gained carrier Jan 17 12:00:10.524032 containerd[1442]: time="2025-01-17T12:00:10.523998131Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 17 12:00:10.538814 containerd[1442]: 2025-01-17 12:00:10.326 [INFO][4093] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 17 12:00:10.538814 containerd[1442]: 2025-01-17 12:00:10.344 [INFO][4093] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7d954f8cd6--tmq7s-eth0 calico-apiserver-7d954f8cd6- calico-apiserver 8e3c1cb9-32d1-4d37-bc9c-8ecf060d43d5 809 0 2025-01-17 11:59:48 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7d954f8cd6 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7d954f8cd6-tmq7s eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calia37daebb0c4 [] []}} ContainerID="eb7bd0489ff1f064c701a71fada08b3e41f660f2decaf1cd896fda3b6ca5ee9b" Namespace="calico-apiserver" Pod="calico-apiserver-7d954f8cd6-tmq7s" WorkloadEndpoint="localhost-k8s-calico--apiserver--7d954f8cd6--tmq7s-" Jan 17 12:00:10.538814 containerd[1442]: 2025-01-17 12:00:10.345 [INFO][4093] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="eb7bd0489ff1f064c701a71fada08b3e41f660f2decaf1cd896fda3b6ca5ee9b" Namespace="calico-apiserver" Pod="calico-apiserver-7d954f8cd6-tmq7s" WorkloadEndpoint="localhost-k8s-calico--apiserver--7d954f8cd6--tmq7s-eth0" Jan 17 12:00:10.538814 containerd[1442]: 2025-01-17 12:00:10.382 [INFO][4124] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="eb7bd0489ff1f064c701a71fada08b3e41f660f2decaf1cd896fda3b6ca5ee9b" HandleID="k8s-pod-network.eb7bd0489ff1f064c701a71fada08b3e41f660f2decaf1cd896fda3b6ca5ee9b" Workload="localhost-k8s-calico--apiserver--7d954f8cd6--tmq7s-eth0" Jan 17 12:00:10.538814 containerd[1442]: 2025-01-17 12:00:10.393 [INFO][4124] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="eb7bd0489ff1f064c701a71fada08b3e41f660f2decaf1cd896fda3b6ca5ee9b" HandleID="k8s-pod-network.eb7bd0489ff1f064c701a71fada08b3e41f660f2decaf1cd896fda3b6ca5ee9b" Workload="localhost-k8s-calico--apiserver--7d954f8cd6--tmq7s-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003ab030), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-7d954f8cd6-tmq7s", "timestamp":"2025-01-17 12:00:10.382900584 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 12:00:10.538814 containerd[1442]: 
2025-01-17 12:00:10.393 [INFO][4124] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:00:10.538814 containerd[1442]: 2025-01-17 12:00:10.417 [INFO][4124] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:00:10.538814 containerd[1442]: 2025-01-17 12:00:10.417 [INFO][4124] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 17 12:00:10.538814 containerd[1442]: 2025-01-17 12:00:10.491 [INFO][4124] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.eb7bd0489ff1f064c701a71fada08b3e41f660f2decaf1cd896fda3b6ca5ee9b" host="localhost" Jan 17 12:00:10.538814 containerd[1442]: 2025-01-17 12:00:10.497 [INFO][4124] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 17 12:00:10.538814 containerd[1442]: 2025-01-17 12:00:10.502 [INFO][4124] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 17 12:00:10.538814 containerd[1442]: 2025-01-17 12:00:10.503 [INFO][4124] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 17 12:00:10.538814 containerd[1442]: 2025-01-17 12:00:10.506 [INFO][4124] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 17 12:00:10.538814 containerd[1442]: 2025-01-17 12:00:10.506 [INFO][4124] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.eb7bd0489ff1f064c701a71fada08b3e41f660f2decaf1cd896fda3b6ca5ee9b" host="localhost" Jan 17 12:00:10.538814 containerd[1442]: 2025-01-17 12:00:10.508 [INFO][4124] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.eb7bd0489ff1f064c701a71fada08b3e41f660f2decaf1cd896fda3b6ca5ee9b Jan 17 12:00:10.538814 containerd[1442]: 2025-01-17 12:00:10.511 [INFO][4124] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.eb7bd0489ff1f064c701a71fada08b3e41f660f2decaf1cd896fda3b6ca5ee9b" host="localhost" Jan 17 12:00:10.538814 containerd[1442]: 2025-01-17 12:00:10.516 [INFO][4124] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.eb7bd0489ff1f064c701a71fada08b3e41f660f2decaf1cd896fda3b6ca5ee9b" host="localhost" Jan 17 12:00:10.538814 containerd[1442]: 2025-01-17 12:00:10.516 [INFO][4124] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.eb7bd0489ff1f064c701a71fada08b3e41f660f2decaf1cd896fda3b6ca5ee9b" host="localhost" Jan 17 12:00:10.538814 containerd[1442]: 2025-01-17 12:00:10.516 [INFO][4124] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 17 12:00:10.538814 containerd[1442]: 2025-01-17 12:00:10.516 [INFO][4124] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="eb7bd0489ff1f064c701a71fada08b3e41f660f2decaf1cd896fda3b6ca5ee9b" HandleID="k8s-pod-network.eb7bd0489ff1f064c701a71fada08b3e41f660f2decaf1cd896fda3b6ca5ee9b" Workload="localhost-k8s-calico--apiserver--7d954f8cd6--tmq7s-eth0" Jan 17 12:00:10.539425 containerd[1442]: 2025-01-17 12:00:10.518 [INFO][4093] cni-plugin/k8s.go 386: Populated endpoint ContainerID="eb7bd0489ff1f064c701a71fada08b3e41f660f2decaf1cd896fda3b6ca5ee9b" Namespace="calico-apiserver" Pod="calico-apiserver-7d954f8cd6-tmq7s" WorkloadEndpoint="localhost-k8s-calico--apiserver--7d954f8cd6--tmq7s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7d954f8cd6--tmq7s-eth0", GenerateName:"calico-apiserver-7d954f8cd6-", Namespace:"calico-apiserver", SelfLink:"", UID:"8e3c1cb9-32d1-4d37-bc9c-8ecf060d43d5", ResourceVersion:"809", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 11, 59, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7d954f8cd6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7d954f8cd6-tmq7s", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia37daebb0c4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:00:10.539425 containerd[1442]: 2025-01-17 12:00:10.518 [INFO][4093] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="eb7bd0489ff1f064c701a71fada08b3e41f660f2decaf1cd896fda3b6ca5ee9b" Namespace="calico-apiserver" Pod="calico-apiserver-7d954f8cd6-tmq7s" WorkloadEndpoint="localhost-k8s-calico--apiserver--7d954f8cd6--tmq7s-eth0" Jan 17 12:00:10.539425 containerd[1442]: 2025-01-17 12:00:10.518 [INFO][4093] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia37daebb0c4 ContainerID="eb7bd0489ff1f064c701a71fada08b3e41f660f2decaf1cd896fda3b6ca5ee9b" Namespace="calico-apiserver" Pod="calico-apiserver-7d954f8cd6-tmq7s" WorkloadEndpoint="localhost-k8s-calico--apiserver--7d954f8cd6--tmq7s-eth0" Jan 17 12:00:10.539425 containerd[1442]: 2025-01-17 12:00:10.525 [INFO][4093] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="eb7bd0489ff1f064c701a71fada08b3e41f660f2decaf1cd896fda3b6ca5ee9b" Namespace="calico-apiserver" Pod="calico-apiserver-7d954f8cd6-tmq7s" WorkloadEndpoint="localhost-k8s-calico--apiserver--7d954f8cd6--tmq7s-eth0" Jan 17 12:00:10.539425 containerd[1442]: 2025-01-17 12:00:10.527 [INFO][4093] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="eb7bd0489ff1f064c701a71fada08b3e41f660f2decaf1cd896fda3b6ca5ee9b" Namespace="calico-apiserver" Pod="calico-apiserver-7d954f8cd6-tmq7s" WorkloadEndpoint="localhost-k8s-calico--apiserver--7d954f8cd6--tmq7s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7d954f8cd6--tmq7s-eth0", GenerateName:"calico-apiserver-7d954f8cd6-", Namespace:"calico-apiserver", SelfLink:"", UID:"8e3c1cb9-32d1-4d37-bc9c-8ecf060d43d5", ResourceVersion:"809", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 11, 59, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7d954f8cd6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"eb7bd0489ff1f064c701a71fada08b3e41f660f2decaf1cd896fda3b6ca5ee9b", Pod:"calico-apiserver-7d954f8cd6-tmq7s", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia37daebb0c4", MAC:"de:d9:85:2d:64:63", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:00:10.539425 containerd[1442]: 2025-01-17 12:00:10.536 [INFO][4093] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="eb7bd0489ff1f064c701a71fada08b3e41f660f2decaf1cd896fda3b6ca5ee9b" Namespace="calico-apiserver" Pod="calico-apiserver-7d954f8cd6-tmq7s" WorkloadEndpoint="localhost-k8s-calico--apiserver--7d954f8cd6--tmq7s-eth0" Jan 17 12:00:10.554930 containerd[1442]: time="2025-01-17T12:00:10.554598817Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:00:10.554930 containerd[1442]: time="2025-01-17T12:00:10.554653269Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:00:10.554930 containerd[1442]: time="2025-01-17T12:00:10.554668032Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:00:10.554930 containerd[1442]: time="2025-01-17T12:00:10.554757212Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:00:10.578431 systemd[1]: Started cri-containerd-eb7bd0489ff1f064c701a71fada08b3e41f660f2decaf1cd896fda3b6ca5ee9b.scope - libcontainer container eb7bd0489ff1f064c701a71fada08b3e41f660f2decaf1cd896fda3b6ca5ee9b. 
Jan 17 12:00:10.590802 systemd-resolved[1312]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 17 12:00:10.607774 containerd[1442]: time="2025-01-17T12:00:10.607730345Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d954f8cd6-tmq7s,Uid:8e3c1cb9-32d1-4d37-bc9c-8ecf060d43d5,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"eb7bd0489ff1f064c701a71fada08b3e41f660f2decaf1cd896fda3b6ca5ee9b\"" Jan 17 12:00:11.140379 containerd[1442]: time="2025-01-17T12:00:11.140178816Z" level=info msg="StopPodSandbox for \"0bb0ba1d5cab4545c0dfd64be4f487c0773838c5999074dba52161d526304434\"" Jan 17 12:00:11.183389 systemd-networkd[1382]: cali9d73cbdcb43: Gained IPv6LL Jan 17 12:00:11.220913 containerd[1442]: 2025-01-17 12:00:11.181 [INFO][4280] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="0bb0ba1d5cab4545c0dfd64be4f487c0773838c5999074dba52161d526304434" Jan 17 12:00:11.220913 containerd[1442]: 2025-01-17 12:00:11.182 [INFO][4280] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="0bb0ba1d5cab4545c0dfd64be4f487c0773838c5999074dba52161d526304434" iface="eth0" netns="/var/run/netns/cni-76d9d654-4f8d-a18d-9ea1-be596485639f" Jan 17 12:00:11.220913 containerd[1442]: 2025-01-17 12:00:11.182 [INFO][4280] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="0bb0ba1d5cab4545c0dfd64be4f487c0773838c5999074dba52161d526304434" iface="eth0" netns="/var/run/netns/cni-76d9d654-4f8d-a18d-9ea1-be596485639f" Jan 17 12:00:11.220913 containerd[1442]: 2025-01-17 12:00:11.182 [INFO][4280] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="0bb0ba1d5cab4545c0dfd64be4f487c0773838c5999074dba52161d526304434" iface="eth0" netns="/var/run/netns/cni-76d9d654-4f8d-a18d-9ea1-be596485639f" Jan 17 12:00:11.220913 containerd[1442]: 2025-01-17 12:00:11.182 [INFO][4280] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="0bb0ba1d5cab4545c0dfd64be4f487c0773838c5999074dba52161d526304434" Jan 17 12:00:11.220913 containerd[1442]: 2025-01-17 12:00:11.182 [INFO][4280] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0bb0ba1d5cab4545c0dfd64be4f487c0773838c5999074dba52161d526304434" Jan 17 12:00:11.220913 containerd[1442]: 2025-01-17 12:00:11.202 [INFO][4288] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0bb0ba1d5cab4545c0dfd64be4f487c0773838c5999074dba52161d526304434" HandleID="k8s-pod-network.0bb0ba1d5cab4545c0dfd64be4f487c0773838c5999074dba52161d526304434" Workload="localhost-k8s-calico--kube--controllers--5d976f8577--5mqh2-eth0" Jan 17 12:00:11.220913 containerd[1442]: 2025-01-17 12:00:11.202 [INFO][4288] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:00:11.220913 containerd[1442]: 2025-01-17 12:00:11.202 [INFO][4288] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:00:11.220913 containerd[1442]: 2025-01-17 12:00:11.211 [WARNING][4288] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0bb0ba1d5cab4545c0dfd64be4f487c0773838c5999074dba52161d526304434" HandleID="k8s-pod-network.0bb0ba1d5cab4545c0dfd64be4f487c0773838c5999074dba52161d526304434" Workload="localhost-k8s-calico--kube--controllers--5d976f8577--5mqh2-eth0" Jan 17 12:00:11.220913 containerd[1442]: 2025-01-17 12:00:11.211 [INFO][4288] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0bb0ba1d5cab4545c0dfd64be4f487c0773838c5999074dba52161d526304434" HandleID="k8s-pod-network.0bb0ba1d5cab4545c0dfd64be4f487c0773838c5999074dba52161d526304434" Workload="localhost-k8s-calico--kube--controllers--5d976f8577--5mqh2-eth0" Jan 17 12:00:11.220913 containerd[1442]: 2025-01-17 12:00:11.215 [INFO][4288] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:00:11.220913 containerd[1442]: 2025-01-17 12:00:11.217 [INFO][4280] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="0bb0ba1d5cab4545c0dfd64be4f487c0773838c5999074dba52161d526304434" Jan 17 12:00:11.223234 containerd[1442]: time="2025-01-17T12:00:11.221000759Z" level=info msg="TearDown network for sandbox \"0bb0ba1d5cab4545c0dfd64be4f487c0773838c5999074dba52161d526304434\" successfully" Jan 17 12:00:11.223234 containerd[1442]: time="2025-01-17T12:00:11.221026244Z" level=info msg="StopPodSandbox for \"0bb0ba1d5cab4545c0dfd64be4f487c0773838c5999074dba52161d526304434\" returns successfully" Jan 17 12:00:11.223498 containerd[1442]: time="2025-01-17T12:00:11.223465880Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5d976f8577-5mqh2,Uid:5dfed3bd-1c28-4d8c-bf55-5f0787bcf7c5,Namespace:calico-system,Attempt:1,}" Jan 17 12:00:11.292930 kubelet[2465]: E0117 12:00:11.292896 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:00:11.335746 systemd[1]: run-netns-cni\x2d76d9d654\x2d4f8d\x2da18d\x2d9ea1\x2dbe596485639f.mount: Deactivated successfully. 
Jan 17 12:00:11.341573 systemd-networkd[1382]: calia32072bdda3: Link UP Jan 17 12:00:11.342248 systemd-networkd[1382]: calia32072bdda3: Gained carrier Jan 17 12:00:11.355844 containerd[1442]: 2025-01-17 12:00:11.255 [INFO][4296] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 17 12:00:11.355844 containerd[1442]: 2025-01-17 12:00:11.269 [INFO][4296] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--5d976f8577--5mqh2-eth0 calico-kube-controllers-5d976f8577- calico-system 5dfed3bd-1c28-4d8c-bf55-5f0787bcf7c5 835 0 2025-01-17 11:59:49 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5d976f8577 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-5d976f8577-5mqh2 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calia32072bdda3 [] []}} ContainerID="f76a31ac71a6c8f9bd6645f0ca8224fa6b2516bc32aeabccd70d2adf10d9bae6" Namespace="calico-system" Pod="calico-kube-controllers-5d976f8577-5mqh2" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5d976f8577--5mqh2-" Jan 17 12:00:11.355844 containerd[1442]: 2025-01-17 12:00:11.269 [INFO][4296] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="f76a31ac71a6c8f9bd6645f0ca8224fa6b2516bc32aeabccd70d2adf10d9bae6" Namespace="calico-system" Pod="calico-kube-controllers-5d976f8577-5mqh2" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5d976f8577--5mqh2-eth0" Jan 17 12:00:11.355844 containerd[1442]: 2025-01-17 12:00:11.301 [INFO][4309] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f76a31ac71a6c8f9bd6645f0ca8224fa6b2516bc32aeabccd70d2adf10d9bae6" HandleID="k8s-pod-network.f76a31ac71a6c8f9bd6645f0ca8224fa6b2516bc32aeabccd70d2adf10d9bae6" Workload="localhost-k8s-calico--kube--controllers--5d976f8577--5mqh2-eth0" Jan 17 12:00:11.355844 containerd[1442]: 2025-01-17 12:00:11.311 [INFO][4309] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f76a31ac71a6c8f9bd6645f0ca8224fa6b2516bc32aeabccd70d2adf10d9bae6" HandleID="k8s-pod-network.f76a31ac71a6c8f9bd6645f0ca8224fa6b2516bc32aeabccd70d2adf10d9bae6" Workload="localhost-k8s-calico--kube--controllers--5d976f8577--5mqh2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003207c0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-5d976f8577-5mqh2", "timestamp":"2025-01-17 12:00:11.30138989 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 12:00:11.355844 containerd[1442]: 2025-01-17 12:00:11.311 [INFO][4309] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:00:11.355844 containerd[1442]: 2025-01-17 12:00:11.311 [INFO][4309] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 17 12:00:11.355844 containerd[1442]: 2025-01-17 12:00:11.311 [INFO][4309] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 17 12:00:11.355844 containerd[1442]: 2025-01-17 12:00:11.313 [INFO][4309] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.f76a31ac71a6c8f9bd6645f0ca8224fa6b2516bc32aeabccd70d2adf10d9bae6" host="localhost" Jan 17 12:00:11.355844 containerd[1442]: 2025-01-17 12:00:11.317 [INFO][4309] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 17 12:00:11.355844 containerd[1442]: 2025-01-17 12:00:11.321 [INFO][4309] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 17 12:00:11.355844 containerd[1442]: 2025-01-17 12:00:11.323 [INFO][4309] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 17 12:00:11.355844 containerd[1442]: 2025-01-17 12:00:11.325 [INFO][4309] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 17 12:00:11.355844 containerd[1442]: 2025-01-17 12:00:11.325 [INFO][4309] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f76a31ac71a6c8f9bd6645f0ca8224fa6b2516bc32aeabccd70d2adf10d9bae6" host="localhost" Jan 17 12:00:11.355844 containerd[1442]: 2025-01-17 12:00:11.327 [INFO][4309] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.f76a31ac71a6c8f9bd6645f0ca8224fa6b2516bc32aeabccd70d2adf10d9bae6 Jan 17 12:00:11.355844 containerd[1442]: 2025-01-17 12:00:11.331 [INFO][4309] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f76a31ac71a6c8f9bd6645f0ca8224fa6b2516bc32aeabccd70d2adf10d9bae6" host="localhost" Jan 17 12:00:11.355844 containerd[1442]: 2025-01-17 12:00:11.337 [INFO][4309] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.f76a31ac71a6c8f9bd6645f0ca8224fa6b2516bc32aeabccd70d2adf10d9bae6" host="localhost" Jan 17 12:00:11.355844 containerd[1442]: 2025-01-17 12:00:11.337 [INFO][4309] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.f76a31ac71a6c8f9bd6645f0ca8224fa6b2516bc32aeabccd70d2adf10d9bae6" host="localhost" Jan 17 12:00:11.355844 containerd[1442]: 2025-01-17 12:00:11.337 [INFO][4309] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 17 12:00:11.355844 containerd[1442]: 2025-01-17 12:00:11.337 [INFO][4309] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="f76a31ac71a6c8f9bd6645f0ca8224fa6b2516bc32aeabccd70d2adf10d9bae6" HandleID="k8s-pod-network.f76a31ac71a6c8f9bd6645f0ca8224fa6b2516bc32aeabccd70d2adf10d9bae6" Workload="localhost-k8s-calico--kube--controllers--5d976f8577--5mqh2-eth0" Jan 17 12:00:11.356713 containerd[1442]: 2025-01-17 12:00:11.339 [INFO][4296] cni-plugin/k8s.go 386: Populated endpoint ContainerID="f76a31ac71a6c8f9bd6645f0ca8224fa6b2516bc32aeabccd70d2adf10d9bae6" Namespace="calico-system" Pod="calico-kube-controllers-5d976f8577-5mqh2" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5d976f8577--5mqh2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5d976f8577--5mqh2-eth0", GenerateName:"calico-kube-controllers-5d976f8577-", Namespace:"calico-system", SelfLink:"", UID:"5dfed3bd-1c28-4d8c-bf55-5f0787bcf7c5", ResourceVersion:"835", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 11, 59, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5d976f8577", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-5d976f8577-5mqh2", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia32072bdda3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:00:11.356713 containerd[1442]: 2025-01-17 12:00:11.339 [INFO][4296] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="f76a31ac71a6c8f9bd6645f0ca8224fa6b2516bc32aeabccd70d2adf10d9bae6" Namespace="calico-system" Pod="calico-kube-controllers-5d976f8577-5mqh2" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5d976f8577--5mqh2-eth0" Jan 17 12:00:11.356713 containerd[1442]: 2025-01-17 12:00:11.339 [INFO][4296] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia32072bdda3 ContainerID="f76a31ac71a6c8f9bd6645f0ca8224fa6b2516bc32aeabccd70d2adf10d9bae6" Namespace="calico-system" Pod="calico-kube-controllers-5d976f8577-5mqh2" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5d976f8577--5mqh2-eth0" Jan 17 12:00:11.356713 containerd[1442]: 2025-01-17 12:00:11.341 [INFO][4296] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f76a31ac71a6c8f9bd6645f0ca8224fa6b2516bc32aeabccd70d2adf10d9bae6" Namespace="calico-system" Pod="calico-kube-controllers-5d976f8577-5mqh2" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5d976f8577--5mqh2-eth0" Jan 17 12:00:11.356713 containerd[1442]: 2025-01-17 12:00:11.341 [INFO][4296] cni-plugin/k8s.go 414: Added Mac, interface name, and active container 
ID to endpoint ContainerID="f76a31ac71a6c8f9bd6645f0ca8224fa6b2516bc32aeabccd70d2adf10d9bae6" Namespace="calico-system" Pod="calico-kube-controllers-5d976f8577-5mqh2" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5d976f8577--5mqh2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5d976f8577--5mqh2-eth0", GenerateName:"calico-kube-controllers-5d976f8577-", Namespace:"calico-system", SelfLink:"", UID:"5dfed3bd-1c28-4d8c-bf55-5f0787bcf7c5", ResourceVersion:"835", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 11, 59, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5d976f8577", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f76a31ac71a6c8f9bd6645f0ca8224fa6b2516bc32aeabccd70d2adf10d9bae6", Pod:"calico-kube-controllers-5d976f8577-5mqh2", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia32072bdda3", MAC:"6a:b9:b3:7d:3d:c5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:00:11.356713 containerd[1442]: 2025-01-17 12:00:11.354 [INFO][4296] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="f76a31ac71a6c8f9bd6645f0ca8224fa6b2516bc32aeabccd70d2adf10d9bae6" Namespace="calico-system" Pod="calico-kube-controllers-5d976f8577-5mqh2" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5d976f8577--5mqh2-eth0" Jan 17 12:00:11.384075 containerd[1442]: time="2025-01-17T12:00:11.383967044Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:00:11.384075 containerd[1442]: time="2025-01-17T12:00:11.384038899Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:00:11.384075 containerd[1442]: time="2025-01-17T12:00:11.384054823Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:00:11.384303 containerd[1442]: time="2025-01-17T12:00:11.384132759Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:00:11.420381 systemd[1]: Started cri-containerd-f76a31ac71a6c8f9bd6645f0ca8224fa6b2516bc32aeabccd70d2adf10d9bae6.scope - libcontainer container f76a31ac71a6c8f9bd6645f0ca8224fa6b2516bc32aeabccd70d2adf10d9bae6. 
Jan 17 12:00:11.431253 systemd-resolved[1312]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 17 12:00:11.446151 containerd[1442]: time="2025-01-17T12:00:11.446092830Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5d976f8577-5mqh2,Uid:5dfed3bd-1c28-4d8c-bf55-5f0787bcf7c5,Namespace:calico-system,Attempt:1,} returns sandbox id \"f76a31ac71a6c8f9bd6645f0ca8224fa6b2516bc32aeabccd70d2adf10d9bae6\"" Jan 17 12:00:11.630401 systemd-networkd[1382]: calia37daebb0c4: Gained IPv6LL Jan 17 12:00:12.140248 containerd[1442]: time="2025-01-17T12:00:12.139963182Z" level=info msg="StopPodSandbox for \"addf4a3ebbde98a78ec691c6e1c2d748806a5739b054be87f34be7db8f5ca751\"" Jan 17 12:00:12.211634 containerd[1442]: 2025-01-17 12:00:12.180 [INFO][4410] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="addf4a3ebbde98a78ec691c6e1c2d748806a5739b054be87f34be7db8f5ca751" Jan 17 12:00:12.211634 containerd[1442]: 2025-01-17 12:00:12.180 [INFO][4410] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="addf4a3ebbde98a78ec691c6e1c2d748806a5739b054be87f34be7db8f5ca751" iface="eth0" netns="/var/run/netns/cni-15f93d7f-aa1c-0ee4-93b4-bf1e61cb476d" Jan 17 12:00:12.211634 containerd[1442]: 2025-01-17 12:00:12.180 [INFO][4410] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="addf4a3ebbde98a78ec691c6e1c2d748806a5739b054be87f34be7db8f5ca751" iface="eth0" netns="/var/run/netns/cni-15f93d7f-aa1c-0ee4-93b4-bf1e61cb476d" Jan 17 12:00:12.211634 containerd[1442]: 2025-01-17 12:00:12.180 [INFO][4410] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="addf4a3ebbde98a78ec691c6e1c2d748806a5739b054be87f34be7db8f5ca751" iface="eth0" netns="/var/run/netns/cni-15f93d7f-aa1c-0ee4-93b4-bf1e61cb476d" Jan 17 12:00:12.211634 containerd[1442]: 2025-01-17 12:00:12.180 [INFO][4410] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="addf4a3ebbde98a78ec691c6e1c2d748806a5739b054be87f34be7db8f5ca751" Jan 17 12:00:12.211634 containerd[1442]: 2025-01-17 12:00:12.180 [INFO][4410] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="addf4a3ebbde98a78ec691c6e1c2d748806a5739b054be87f34be7db8f5ca751" Jan 17 12:00:12.211634 containerd[1442]: 2025-01-17 12:00:12.199 [INFO][4417] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="addf4a3ebbde98a78ec691c6e1c2d748806a5739b054be87f34be7db8f5ca751" HandleID="k8s-pod-network.addf4a3ebbde98a78ec691c6e1c2d748806a5739b054be87f34be7db8f5ca751" Workload="localhost-k8s-coredns--6f6b679f8f--28s6w-eth0" Jan 17 12:00:12.211634 containerd[1442]: 2025-01-17 12:00:12.199 [INFO][4417] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:00:12.211634 containerd[1442]: 2025-01-17 12:00:12.199 [INFO][4417] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:00:12.211634 containerd[1442]: 2025-01-17 12:00:12.207 [WARNING][4417] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="addf4a3ebbde98a78ec691c6e1c2d748806a5739b054be87f34be7db8f5ca751" HandleID="k8s-pod-network.addf4a3ebbde98a78ec691c6e1c2d748806a5739b054be87f34be7db8f5ca751" Workload="localhost-k8s-coredns--6f6b679f8f--28s6w-eth0" Jan 17 12:00:12.211634 containerd[1442]: 2025-01-17 12:00:12.207 [INFO][4417] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="addf4a3ebbde98a78ec691c6e1c2d748806a5739b054be87f34be7db8f5ca751" HandleID="k8s-pod-network.addf4a3ebbde98a78ec691c6e1c2d748806a5739b054be87f34be7db8f5ca751" Workload="localhost-k8s-coredns--6f6b679f8f--28s6w-eth0" Jan 17 12:00:12.211634 containerd[1442]: 2025-01-17 12:00:12.208 [INFO][4417] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:00:12.211634 containerd[1442]: 2025-01-17 12:00:12.210 [INFO][4410] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="addf4a3ebbde98a78ec691c6e1c2d748806a5739b054be87f34be7db8f5ca751" Jan 17 12:00:12.212492 containerd[1442]: time="2025-01-17T12:00:12.212335910Z" level=info msg="TearDown network for sandbox \"addf4a3ebbde98a78ec691c6e1c2d748806a5739b054be87f34be7db8f5ca751\" successfully" Jan 17 12:00:12.212492 containerd[1442]: time="2025-01-17T12:00:12.212369797Z" level=info msg="StopPodSandbox for \"addf4a3ebbde98a78ec691c6e1c2d748806a5739b054be87f34be7db8f5ca751\" returns successfully" Jan 17 12:00:12.213225 kubelet[2465]: E0117 12:00:12.212782 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:00:12.213827 containerd[1442]: time="2025-01-17T12:00:12.213496709Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-28s6w,Uid:bf214c06-9b1d-4dac-a1ec-12f7f34d3261,Namespace:kube-system,Attempt:1,}" Jan 17 12:00:12.214376 systemd[1]: run-netns-cni\x2d15f93d7f\x2daa1c\x2d0ee4\x2d93b4\x2dbf1e61cb476d.mount: Deactivated successfully. 
Jan 17 12:00:12.302213 kubelet[2465]: E0117 12:00:12.301965 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:00:12.326825 systemd-networkd[1382]: calife70c094590: Link UP Jan 17 12:00:12.327157 systemd-networkd[1382]: calife70c094590: Gained carrier Jan 17 12:00:12.337628 systemd-networkd[1382]: cali0f95db42209: Gained IPv6LL Jan 17 12:00:12.341717 containerd[1442]: 2025-01-17 12:00:12.245 [INFO][4424] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 17 12:00:12.341717 containerd[1442]: 2025-01-17 12:00:12.256 [INFO][4424] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--6f6b679f8f--28s6w-eth0 coredns-6f6b679f8f- kube-system bf214c06-9b1d-4dac-a1ec-12f7f34d3261 849 0 2025-01-17 11:59:41 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-6f6b679f8f-28s6w eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calife70c094590 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="e59a1e5bd9bd3ba25081d33f67c4a823c78d3dbad3b306b68749adb845d4b829" Namespace="kube-system" Pod="coredns-6f6b679f8f-28s6w" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--28s6w-" Jan 17 12:00:12.341717 containerd[1442]: 2025-01-17 12:00:12.256 [INFO][4424] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="e59a1e5bd9bd3ba25081d33f67c4a823c78d3dbad3b306b68749adb845d4b829" Namespace="kube-system" Pod="coredns-6f6b679f8f-28s6w" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--28s6w-eth0" Jan 17 12:00:12.341717 containerd[1442]: 2025-01-17 12:00:12.281 [INFO][4438] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e59a1e5bd9bd3ba25081d33f67c4a823c78d3dbad3b306b68749adb845d4b829" HandleID="k8s-pod-network.e59a1e5bd9bd3ba25081d33f67c4a823c78d3dbad3b306b68749adb845d4b829" Workload="localhost-k8s-coredns--6f6b679f8f--28s6w-eth0" Jan 17 12:00:12.341717 containerd[1442]: 2025-01-17 12:00:12.291 [INFO][4438] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e59a1e5bd9bd3ba25081d33f67c4a823c78d3dbad3b306b68749adb845d4b829" HandleID="k8s-pod-network.e59a1e5bd9bd3ba25081d33f67c4a823c78d3dbad3b306b68749adb845d4b829" Workload="localhost-k8s-coredns--6f6b679f8f--28s6w-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002f38a0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-6f6b679f8f-28s6w", "timestamp":"2025-01-17 12:00:12.281225684 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 12:00:12.341717 containerd[1442]: 2025-01-17 12:00:12.291 [INFO][4438] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:00:12.341717 containerd[1442]: 2025-01-17 12:00:12.291 [INFO][4438] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 17 12:00:12.341717 containerd[1442]: 2025-01-17 12:00:12.291 [INFO][4438] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 17 12:00:12.341717 containerd[1442]: 2025-01-17 12:00:12.293 [INFO][4438] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.e59a1e5bd9bd3ba25081d33f67c4a823c78d3dbad3b306b68749adb845d4b829" host="localhost" Jan 17 12:00:12.341717 containerd[1442]: 2025-01-17 12:00:12.297 [INFO][4438] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 17 12:00:12.341717 containerd[1442]: 2025-01-17 12:00:12.307 [INFO][4438] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 17 12:00:12.341717 containerd[1442]: 2025-01-17 12:00:12.308 [INFO][4438] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 17 12:00:12.341717 containerd[1442]: 2025-01-17 12:00:12.312 [INFO][4438] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 17 12:00:12.341717 containerd[1442]: 2025-01-17 12:00:12.312 [INFO][4438] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e59a1e5bd9bd3ba25081d33f67c4a823c78d3dbad3b306b68749adb845d4b829" host="localhost" Jan 17 12:00:12.341717 containerd[1442]: 2025-01-17 12:00:12.313 [INFO][4438] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.e59a1e5bd9bd3ba25081d33f67c4a823c78d3dbad3b306b68749adb845d4b829 Jan 17 12:00:12.341717 containerd[1442]: 2025-01-17 12:00:12.316 [INFO][4438] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e59a1e5bd9bd3ba25081d33f67c4a823c78d3dbad3b306b68749adb845d4b829" host="localhost" Jan 17 12:00:12.341717 containerd[1442]: 2025-01-17 12:00:12.322 [INFO][4438] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.e59a1e5bd9bd3ba25081d33f67c4a823c78d3dbad3b306b68749adb845d4b829" host="localhost" Jan 17 12:00:12.341717 containerd[1442]: 2025-01-17 12:00:12.322 [INFO][4438] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.e59a1e5bd9bd3ba25081d33f67c4a823c78d3dbad3b306b68749adb845d4b829" host="localhost" Jan 17 12:00:12.341717 containerd[1442]: 2025-01-17 12:00:12.322 [INFO][4438] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 17 12:00:12.341717 containerd[1442]: 2025-01-17 12:00:12.322 [INFO][4438] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="e59a1e5bd9bd3ba25081d33f67c4a823c78d3dbad3b306b68749adb845d4b829" HandleID="k8s-pod-network.e59a1e5bd9bd3ba25081d33f67c4a823c78d3dbad3b306b68749adb845d4b829" Workload="localhost-k8s-coredns--6f6b679f8f--28s6w-eth0" Jan 17 12:00:12.342245 containerd[1442]: 2025-01-17 12:00:12.325 [INFO][4424] cni-plugin/k8s.go 386: Populated endpoint ContainerID="e59a1e5bd9bd3ba25081d33f67c4a823c78d3dbad3b306b68749adb845d4b829" Namespace="kube-system" Pod="coredns-6f6b679f8f-28s6w" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--28s6w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--28s6w-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"bf214c06-9b1d-4dac-a1ec-12f7f34d3261", ResourceVersion:"849", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 11, 59, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-6f6b679f8f-28s6w", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calife70c094590", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:00:12.342245 containerd[1442]: 2025-01-17 12:00:12.325 [INFO][4424] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="e59a1e5bd9bd3ba25081d33f67c4a823c78d3dbad3b306b68749adb845d4b829" Namespace="kube-system" Pod="coredns-6f6b679f8f-28s6w" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--28s6w-eth0" Jan 17 12:00:12.342245 containerd[1442]: 2025-01-17 12:00:12.325 [INFO][4424] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calife70c094590 ContainerID="e59a1e5bd9bd3ba25081d33f67c4a823c78d3dbad3b306b68749adb845d4b829" Namespace="kube-system" Pod="coredns-6f6b679f8f-28s6w" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--28s6w-eth0" Jan 17 12:00:12.342245 containerd[1442]: 2025-01-17 12:00:12.327 [INFO][4424] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e59a1e5bd9bd3ba25081d33f67c4a823c78d3dbad3b306b68749adb845d4b829" Namespace="kube-system" Pod="coredns-6f6b679f8f-28s6w" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--28s6w-eth0" Jan 17 12:00:12.342245 containerd[1442]: 2025-01-17 12:00:12.327 
[INFO][4424] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="e59a1e5bd9bd3ba25081d33f67c4a823c78d3dbad3b306b68749adb845d4b829" Namespace="kube-system" Pod="coredns-6f6b679f8f-28s6w" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--28s6w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--28s6w-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"bf214c06-9b1d-4dac-a1ec-12f7f34d3261", ResourceVersion:"849", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 11, 59, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e59a1e5bd9bd3ba25081d33f67c4a823c78d3dbad3b306b68749adb845d4b829", Pod:"coredns-6f6b679f8f-28s6w", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calife70c094590", MAC:"fa:84:68:42:ae:1a", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:00:12.342245 containerd[1442]: 2025-01-17 12:00:12.340 [INFO][4424] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="e59a1e5bd9bd3ba25081d33f67c4a823c78d3dbad3b306b68749adb845d4b829" Namespace="kube-system" Pod="coredns-6f6b679f8f-28s6w" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--28s6w-eth0" Jan 17 12:00:12.357537 containerd[1442]: time="2025-01-17T12:00:12.357180707Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:00:12.357537 containerd[1442]: time="2025-01-17T12:00:12.357475808Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:00:12.357537 containerd[1442]: time="2025-01-17T12:00:12.357507855Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:00:12.357963 containerd[1442]: time="2025-01-17T12:00:12.357593832Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:00:12.379373 systemd[1]: Started cri-containerd-e59a1e5bd9bd3ba25081d33f67c4a823c78d3dbad3b306b68749adb845d4b829.scope - libcontainer container e59a1e5bd9bd3ba25081d33f67c4a823c78d3dbad3b306b68749adb845d4b829. 
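The ADD sequence above — take the host-wide IPAM lock, confirm this host's affinity for the 192.168.88.128/26 block, claim one free address from it, release the lock — can be sketched as below. This is a minimal illustration under stated assumptions, not Calico's actual ipam package; every name in it is hypothetical.

package main

import (
	"fmt"
	"net"
)

// Toy model of the block-based assignment suggested by the log: walk the
// host-affine /26 block and hand out the first address with no handle
// recorded against it. (The real plugin does this under the host-wide lock
// and persists the block to the datastore before confirming the claim.)
type block struct {
	cidr      *net.IPNet
	allocated map[string]string // address -> handle ID
}

func (b *block) assign(handle string) (net.IP, error) {
	for ip := b.cidr.IP.Mask(b.cidr.Mask); b.cidr.Contains(ip); ip = next(ip) {
		if _, used := b.allocated[ip.String()]; !used {
			b.allocated[ip.String()] = handle
			return ip, nil
		}
	}
	return nil, fmt.Errorf("block %s exhausted", b.cidr)
}

// next returns the numerically following IP address.
func next(ip net.IP) net.IP {
	out := make(net.IP, len(ip))
	copy(out, ip)
	for i := len(out) - 1; i >= 0; i-- {
		out[i]++
		if out[i] != 0 {
			break
		}
	}
	return out
}

func main() {
	_, cidr, _ := net.ParseCIDR("192.168.88.128/26")
	// Assume .128 through .132 are already taken; the log above shows .131 and
	// .132 in use and .133 being claimed for coredns-6f6b679f8f-28s6w.
	taken := map[string]string{}
	for i := 128; i <= 132; i++ {
		taken[fmt.Sprintf("192.168.88.%d", i)] = "already-claimed"
	}
	b := &block{cidr: cidr, allocated: taken}
	ip, _ := b.assign("k8s-pod-network.e59a1e5b...")
	fmt.Println(ip) // prints 192.168.88.133, matching the address assigned above
}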
Jan 17 12:00:12.389634 systemd-resolved[1312]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 17 12:00:12.426654 containerd[1442]: time="2025-01-17T12:00:12.426613353Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-28s6w,Uid:bf214c06-9b1d-4dac-a1ec-12f7f34d3261,Namespace:kube-system,Attempt:1,} returns sandbox id \"e59a1e5bd9bd3ba25081d33f67c4a823c78d3dbad3b306b68749adb845d4b829\"" Jan 17 12:00:12.427304 kubelet[2465]: E0117 12:00:12.427285 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:00:12.428936 containerd[1442]: time="2025-01-17T12:00:12.428906543Z" level=info msg="CreateContainer within sandbox \"e59a1e5bd9bd3ba25081d33f67c4a823c78d3dbad3b306b68749adb845d4b829\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 17 12:00:12.441516 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2678207292.mount: Deactivated successfully. Jan 17 12:00:12.442549 containerd[1442]: time="2025-01-17T12:00:12.442449321Z" level=info msg="CreateContainer within sandbox \"e59a1e5bd9bd3ba25081d33f67c4a823c78d3dbad3b306b68749adb845d4b829\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"23a1d2bb5a6d39b3d28244754af3e3ef7f70320e5eaa2c4c67df6b0e4a257887\"" Jan 17 12:00:12.442941 containerd[1442]: time="2025-01-17T12:00:12.442916457Z" level=info msg="StartContainer for \"23a1d2bb5a6d39b3d28244754af3e3ef7f70320e5eaa2c4c67df6b0e4a257887\"" Jan 17 12:00:12.464335 systemd[1]: Started cri-containerd-23a1d2bb5a6d39b3d28244754af3e3ef7f70320e5eaa2c4c67df6b0e4a257887.scope - libcontainer container 23a1d2bb5a6d39b3d28244754af3e3ef7f70320e5eaa2c4c67df6b0e4a257887. Jan 17 12:00:12.485846 containerd[1442]: time="2025-01-17T12:00:12.485738163Z" level=info msg="StartContainer for \"23a1d2bb5a6d39b3d28244754af3e3ef7f70320e5eaa2c4c67df6b0e4a257887\" returns successfully" Jan 17 12:00:13.009157 systemd[1]: Started sshd@8-10.0.0.32:22-10.0.0.1:43004.service - OpenSSH per-connection server daemon (10.0.0.1:43004). Jan 17 12:00:13.051807 sshd[4565]: Accepted publickey for core from 10.0.0.1 port 43004 ssh2: RSA SHA256:xsFjL0Ru499iNfhRyIcjP6wTIWZ5oE8f5Pm6hYv+KHo Jan 17 12:00:13.053324 sshd[4565]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:00:13.057238 systemd-logind[1424]: New session 9 of user core. Jan 17 12:00:13.068450 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 17 12:00:13.141660 containerd[1442]: time="2025-01-17T12:00:13.141319732Z" level=info msg="StopPodSandbox for \"b78391e2512c7a5fe0821cbd72999136727c5cad9d4a85d68b507395aa2d3f12\"" Jan 17 12:00:13.230950 containerd[1442]: 2025-01-17 12:00:13.185 [INFO][4593] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b78391e2512c7a5fe0821cbd72999136727c5cad9d4a85d68b507395aa2d3f12" Jan 17 12:00:13.230950 containerd[1442]: 2025-01-17 12:00:13.185 [INFO][4593] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b78391e2512c7a5fe0821cbd72999136727c5cad9d4a85d68b507395aa2d3f12" iface="eth0" netns="/var/run/netns/cni-0ba3b1db-75b0-7b40-2dae-6a732481b320" Jan 17 12:00:13.230950 containerd[1442]: 2025-01-17 12:00:13.185 [INFO][4593] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="b78391e2512c7a5fe0821cbd72999136727c5cad9d4a85d68b507395aa2d3f12" iface="eth0" netns="/var/run/netns/cni-0ba3b1db-75b0-7b40-2dae-6a732481b320" Jan 17 12:00:13.230950 containerd[1442]: 2025-01-17 12:00:13.185 [INFO][4593] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="b78391e2512c7a5fe0821cbd72999136727c5cad9d4a85d68b507395aa2d3f12" iface="eth0" netns="/var/run/netns/cni-0ba3b1db-75b0-7b40-2dae-6a732481b320" Jan 17 12:00:13.230950 containerd[1442]: 2025-01-17 12:00:13.185 [INFO][4593] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b78391e2512c7a5fe0821cbd72999136727c5cad9d4a85d68b507395aa2d3f12" Jan 17 12:00:13.230950 containerd[1442]: 2025-01-17 12:00:13.185 [INFO][4593] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b78391e2512c7a5fe0821cbd72999136727c5cad9d4a85d68b507395aa2d3f12" Jan 17 12:00:13.230950 containerd[1442]: 2025-01-17 12:00:13.211 [INFO][4601] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b78391e2512c7a5fe0821cbd72999136727c5cad9d4a85d68b507395aa2d3f12" HandleID="k8s-pod-network.b78391e2512c7a5fe0821cbd72999136727c5cad9d4a85d68b507395aa2d3f12" Workload="localhost-k8s-csi--node--driver--j7bcf-eth0" Jan 17 12:00:13.230950 containerd[1442]: 2025-01-17 12:00:13.212 [INFO][4601] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:00:13.230950 containerd[1442]: 2025-01-17 12:00:13.212 [INFO][4601] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:00:13.230950 containerd[1442]: 2025-01-17 12:00:13.224 [WARNING][4601] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="b78391e2512c7a5fe0821cbd72999136727c5cad9d4a85d68b507395aa2d3f12" HandleID="k8s-pod-network.b78391e2512c7a5fe0821cbd72999136727c5cad9d4a85d68b507395aa2d3f12" Workload="localhost-k8s-csi--node--driver--j7bcf-eth0" Jan 17 12:00:13.230950 containerd[1442]: 2025-01-17 12:00:13.224 [INFO][4601] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b78391e2512c7a5fe0821cbd72999136727c5cad9d4a85d68b507395aa2d3f12" HandleID="k8s-pod-network.b78391e2512c7a5fe0821cbd72999136727c5cad9d4a85d68b507395aa2d3f12" Workload="localhost-k8s-csi--node--driver--j7bcf-eth0" Jan 17 12:00:13.230950 containerd[1442]: 2025-01-17 12:00:13.226 [INFO][4601] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:00:13.230950 containerd[1442]: 2025-01-17 12:00:13.228 [INFO][4593] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="b78391e2512c7a5fe0821cbd72999136727c5cad9d4a85d68b507395aa2d3f12" Jan 17 12:00:13.231630 containerd[1442]: time="2025-01-17T12:00:13.231512051Z" level=info msg="TearDown network for sandbox \"b78391e2512c7a5fe0821cbd72999136727c5cad9d4a85d68b507395aa2d3f12\" successfully" Jan 17 12:00:13.231630 containerd[1442]: time="2025-01-17T12:00:13.231540897Z" level=info msg="StopPodSandbox for \"b78391e2512c7a5fe0821cbd72999136727c5cad9d4a85d68b507395aa2d3f12\" returns successfully" Jan 17 12:00:13.232238 containerd[1442]: time="2025-01-17T12:00:13.232173303Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-j7bcf,Uid:074801cb-bd28-41a4-b464-ef5bfb657c08,Namespace:calico-system,Attempt:1,}" Jan 17 12:00:13.257411 sshd[4565]: pam_unix(sshd:session): session closed for user core Jan 17 12:00:13.261248 systemd[1]: sshd@8-10.0.0.32:22-10.0.0.1:43004.service: Deactivated successfully. Jan 17 12:00:13.264661 systemd[1]: session-9.scope: Deactivated successfully. 
Jan 17 12:00:13.266433 systemd-logind[1424]: Session 9 logged out. Waiting for processes to exit. Jan 17 12:00:13.267474 systemd-logind[1424]: Removed session 9. Jan 17 12:00:13.306716 kubelet[2465]: E0117 12:00:13.306674 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:00:13.320333 kubelet[2465]: I0117 12:00:13.319015 2465 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-28s6w" podStartSLOduration=32.318999872 podStartE2EDuration="32.318999872s" podCreationTimestamp="2025-01-17 11:59:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:00:13.318300693 +0000 UTC m=+38.248822505" watchObservedRunningTime="2025-01-17 12:00:13.318999872 +0000 UTC m=+38.249521724" Jan 17 12:00:13.340659 systemd[1]: run-netns-cni\x2d0ba3b1db\x2d75b0\x2d7b40\x2d2dae\x2d6a732481b320.mount: Deactivated successfully. Jan 17 12:00:13.360272 systemd-networkd[1382]: calia32072bdda3: Gained IPv6LL Jan 17 12:00:13.439707 systemd-networkd[1382]: calide1479d73c4: Link UP Jan 17 12:00:13.439870 systemd-networkd[1382]: calide1479d73c4: Gained carrier Jan 17 12:00:13.451482 containerd[1442]: 2025-01-17 12:00:13.260 [INFO][4609] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 17 12:00:13.451482 containerd[1442]: 2025-01-17 12:00:13.273 [INFO][4609] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--j7bcf-eth0 csi-node-driver- calico-system 074801cb-bd28-41a4-b464-ef5bfb657c08 864 0 2025-01-17 11:59:49 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:56747c9949 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-j7bcf eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calide1479d73c4 [] []}} ContainerID="0a3840fa2cecbad1b7d87ce6a03ba6616acd100fe0e3619faf6b50663535e25f" Namespace="calico-system" Pod="csi-node-driver-j7bcf" WorkloadEndpoint="localhost-k8s-csi--node--driver--j7bcf-" Jan 17 12:00:13.451482 containerd[1442]: 2025-01-17 12:00:13.273 [INFO][4609] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="0a3840fa2cecbad1b7d87ce6a03ba6616acd100fe0e3619faf6b50663535e25f" Namespace="calico-system" Pod="csi-node-driver-j7bcf" WorkloadEndpoint="localhost-k8s-csi--node--driver--j7bcf-eth0" Jan 17 12:00:13.451482 containerd[1442]: 2025-01-17 12:00:13.302 [INFO][4624] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0a3840fa2cecbad1b7d87ce6a03ba6616acd100fe0e3619faf6b50663535e25f" HandleID="k8s-pod-network.0a3840fa2cecbad1b7d87ce6a03ba6616acd100fe0e3619faf6b50663535e25f" Workload="localhost-k8s-csi--node--driver--j7bcf-eth0" Jan 17 12:00:13.451482 containerd[1442]: 2025-01-17 12:00:13.316 [INFO][4624] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0a3840fa2cecbad1b7d87ce6a03ba6616acd100fe0e3619faf6b50663535e25f" HandleID="k8s-pod-network.0a3840fa2cecbad1b7d87ce6a03ba6616acd100fe0e3619faf6b50663535e25f" Workload="localhost-k8s-csi--node--driver--j7bcf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, 
HandleID:(*string)(0x4000288980), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-j7bcf", "timestamp":"2025-01-17 12:00:13.30286654 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 12:00:13.451482 containerd[1442]: 2025-01-17 12:00:13.316 [INFO][4624] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:00:13.451482 containerd[1442]: 2025-01-17 12:00:13.316 [INFO][4624] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:00:13.451482 containerd[1442]: 2025-01-17 12:00:13.316 [INFO][4624] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 17 12:00:13.451482 containerd[1442]: 2025-01-17 12:00:13.324 [INFO][4624] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.0a3840fa2cecbad1b7d87ce6a03ba6616acd100fe0e3619faf6b50663535e25f" host="localhost" Jan 17 12:00:13.451482 containerd[1442]: 2025-01-17 12:00:13.415 [INFO][4624] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 17 12:00:13.451482 containerd[1442]: 2025-01-17 12:00:13.420 [INFO][4624] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 17 12:00:13.451482 containerd[1442]: 2025-01-17 12:00:13.421 [INFO][4624] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 17 12:00:13.451482 containerd[1442]: 2025-01-17 12:00:13.424 [INFO][4624] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 17 12:00:13.451482 containerd[1442]: 2025-01-17 12:00:13.424 [INFO][4624] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.0a3840fa2cecbad1b7d87ce6a03ba6616acd100fe0e3619faf6b50663535e25f" host="localhost" Jan 17 12:00:13.451482 containerd[1442]: 2025-01-17 12:00:13.425 [INFO][4624] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.0a3840fa2cecbad1b7d87ce6a03ba6616acd100fe0e3619faf6b50663535e25f Jan 17 12:00:13.451482 containerd[1442]: 2025-01-17 12:00:13.428 [INFO][4624] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.0a3840fa2cecbad1b7d87ce6a03ba6616acd100fe0e3619faf6b50663535e25f" host="localhost" Jan 17 12:00:13.451482 containerd[1442]: 2025-01-17 12:00:13.434 [INFO][4624] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.0a3840fa2cecbad1b7d87ce6a03ba6616acd100fe0e3619faf6b50663535e25f" host="localhost" Jan 17 12:00:13.451482 containerd[1442]: 2025-01-17 12:00:13.434 [INFO][4624] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.0a3840fa2cecbad1b7d87ce6a03ba6616acd100fe0e3619faf6b50663535e25f" host="localhost" Jan 17 12:00:13.451482 containerd[1442]: 2025-01-17 12:00:13.434 [INFO][4624] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 17 12:00:13.451482 containerd[1442]: 2025-01-17 12:00:13.434 [INFO][4624] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="0a3840fa2cecbad1b7d87ce6a03ba6616acd100fe0e3619faf6b50663535e25f" HandleID="k8s-pod-network.0a3840fa2cecbad1b7d87ce6a03ba6616acd100fe0e3619faf6b50663535e25f" Workload="localhost-k8s-csi--node--driver--j7bcf-eth0" Jan 17 12:00:13.452270 containerd[1442]: 2025-01-17 12:00:13.436 [INFO][4609] cni-plugin/k8s.go 386: Populated endpoint ContainerID="0a3840fa2cecbad1b7d87ce6a03ba6616acd100fe0e3619faf6b50663535e25f" Namespace="calico-system" Pod="csi-node-driver-j7bcf" WorkloadEndpoint="localhost-k8s-csi--node--driver--j7bcf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--j7bcf-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"074801cb-bd28-41a4-b464-ef5bfb657c08", ResourceVersion:"864", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 11, 59, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-j7bcf", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calide1479d73c4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:00:13.452270 containerd[1442]: 2025-01-17 12:00:13.436 [INFO][4609] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.134/32] ContainerID="0a3840fa2cecbad1b7d87ce6a03ba6616acd100fe0e3619faf6b50663535e25f" Namespace="calico-system" Pod="csi-node-driver-j7bcf" WorkloadEndpoint="localhost-k8s-csi--node--driver--j7bcf-eth0" Jan 17 12:00:13.452270 containerd[1442]: 2025-01-17 12:00:13.436 [INFO][4609] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calide1479d73c4 ContainerID="0a3840fa2cecbad1b7d87ce6a03ba6616acd100fe0e3619faf6b50663535e25f" Namespace="calico-system" Pod="csi-node-driver-j7bcf" WorkloadEndpoint="localhost-k8s-csi--node--driver--j7bcf-eth0" Jan 17 12:00:13.452270 containerd[1442]: 2025-01-17 12:00:13.438 [INFO][4609] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0a3840fa2cecbad1b7d87ce6a03ba6616acd100fe0e3619faf6b50663535e25f" Namespace="calico-system" Pod="csi-node-driver-j7bcf" WorkloadEndpoint="localhost-k8s-csi--node--driver--j7bcf-eth0" Jan 17 12:00:13.452270 containerd[1442]: 2025-01-17 12:00:13.439 [INFO][4609] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="0a3840fa2cecbad1b7d87ce6a03ba6616acd100fe0e3619faf6b50663535e25f" Namespace="calico-system" Pod="csi-node-driver-j7bcf" WorkloadEndpoint="localhost-k8s-csi--node--driver--j7bcf-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--j7bcf-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"074801cb-bd28-41a4-b464-ef5bfb657c08", ResourceVersion:"864", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 11, 59, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0a3840fa2cecbad1b7d87ce6a03ba6616acd100fe0e3619faf6b50663535e25f", Pod:"csi-node-driver-j7bcf", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calide1479d73c4", MAC:"fa:40:ad:1f:22:1f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:00:13.452270 containerd[1442]: 2025-01-17 12:00:13.449 [INFO][4609] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="0a3840fa2cecbad1b7d87ce6a03ba6616acd100fe0e3619faf6b50663535e25f" Namespace="calico-system" Pod="csi-node-driver-j7bcf" WorkloadEndpoint="localhost-k8s-csi--node--driver--j7bcf-eth0" Jan 17 12:00:13.468980 containerd[1442]: time="2025-01-17T12:00:13.468873875Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:00:13.468980 containerd[1442]: time="2025-01-17T12:00:13.468954011Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:00:13.468980 containerd[1442]: time="2025-01-17T12:00:13.468966814Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:00:13.469272 containerd[1442]: time="2025-01-17T12:00:13.469063233Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:00:13.486111 kubelet[2465]: I0117 12:00:13.485897 2465 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 17 12:00:13.486402 kubelet[2465]: E0117 12:00:13.486323 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:00:13.489424 systemd[1]: Started cri-containerd-0a3840fa2cecbad1b7d87ce6a03ba6616acd100fe0e3619faf6b50663535e25f.scope - libcontainer container 0a3840fa2cecbad1b7d87ce6a03ba6616acd100fe0e3619faf6b50663535e25f. 
Jan 17 12:00:13.500782 systemd-resolved[1312]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 17 12:00:13.511560 containerd[1442]: time="2025-01-17T12:00:13.511464636Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-j7bcf,Uid:074801cb-bd28-41a4-b464-ef5bfb657c08,Namespace:calico-system,Attempt:1,} returns sandbox id \"0a3840fa2cecbad1b7d87ce6a03ba6616acd100fe0e3619faf6b50663535e25f\"" Jan 17 12:00:13.871429 systemd-networkd[1382]: calife70c094590: Gained IPv6LL Jan 17 12:00:14.312894 kubelet[2465]: E0117 12:00:14.312867 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:00:15.314785 kubelet[2465]: E0117 12:00:15.314746 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:00:15.470388 systemd-networkd[1382]: calide1479d73c4: Gained IPv6LL Jan 17 12:00:16.390373 kubelet[2465]: I0117 12:00:16.389979 2465 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 17 12:00:16.390373 kubelet[2465]: E0117 12:00:16.390373 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:00:16.768247 kernel: bpftool[4827]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 17 12:00:16.937822 systemd-networkd[1382]: vxlan.calico: Link UP Jan 17 12:00:16.937834 systemd-networkd[1382]: vxlan.calico: Gained carrier Jan 17 12:00:17.322146 kubelet[2465]: E0117 12:00:17.322116 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:00:18.271830 systemd[1]: Started sshd@9-10.0.0.32:22-10.0.0.1:43010.service - OpenSSH per-connection server daemon (10.0.0.1:43010). Jan 17 12:00:18.320647 sshd[4947]: Accepted publickey for core from 10.0.0.1 port 43010 ssh2: RSA SHA256:xsFjL0Ru499iNfhRyIcjP6wTIWZ5oE8f5Pm6hYv+KHo Jan 17 12:00:18.322143 sshd[4947]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:00:18.326997 systemd-logind[1424]: New session 10 of user core. Jan 17 12:00:18.336662 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 17 12:00:18.518303 sshd[4947]: pam_unix(sshd:session): session closed for user core Jan 17 12:00:18.521734 systemd[1]: sshd@9-10.0.0.32:22-10.0.0.1:43010.service: Deactivated successfully. Jan 17 12:00:18.524888 systemd[1]: session-10.scope: Deactivated successfully. Jan 17 12:00:18.525775 systemd-logind[1424]: Session 10 logged out. Waiting for processes to exit. Jan 17 12:00:18.527068 systemd-logind[1424]: Removed session 10. Jan 17 12:00:18.670478 systemd-networkd[1382]: vxlan.calico: Gained IPv6LL Jan 17 12:00:23.532738 systemd[1]: Started sshd@10-10.0.0.32:22-10.0.0.1:55250.service - OpenSSH per-connection server daemon (10.0.0.1:55250). Jan 17 12:00:23.568022 sshd[4972]: Accepted publickey for core from 10.0.0.1 port 55250 ssh2: RSA SHA256:xsFjL0Ru499iNfhRyIcjP6wTIWZ5oE8f5Pm6hYv+KHo Jan 17 12:00:23.569328 sshd[4972]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:00:23.573010 systemd-logind[1424]: New session 11 of user core. 
Jan 17 12:00:23.579340 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 17 12:00:23.725013 sshd[4972]: pam_unix(sshd:session): session closed for user core Jan 17 12:00:23.728265 systemd[1]: sshd@10-10.0.0.32:22-10.0.0.1:55250.service: Deactivated successfully. Jan 17 12:00:23.730017 systemd[1]: session-11.scope: Deactivated successfully. Jan 17 12:00:23.730662 systemd-logind[1424]: Session 11 logged out. Waiting for processes to exit. Jan 17 12:00:23.731527 systemd-logind[1424]: Removed session 11. Jan 17 12:00:26.989144 containerd[1442]: time="2025-01-17T12:00:26.989098528Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:00:26.990161 containerd[1442]: time="2025-01-17T12:00:26.989961056Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=39298409" Jan 17 12:00:26.991707 containerd[1442]: time="2025-01-17T12:00:26.990931079Z" level=info msg="ImageCreate event name:\"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:00:26.993490 containerd[1442]: time="2025-01-17T12:00:26.993449011Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:00:26.994120 containerd[1442]: time="2025-01-17T12:00:26.994081824Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"40668079\" in 16.469910255s" Jan 17 12:00:26.994120 containerd[1442]: time="2025-01-17T12:00:26.994115869Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\"" Jan 17 12:00:26.995268 containerd[1442]: time="2025-01-17T12:00:26.995021203Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 17 12:00:26.997149 containerd[1442]: time="2025-01-17T12:00:26.997037621Z" level=info msg="CreateContainer within sandbox \"f3ccaa38190acb23cb1e3a54053033cd4ea0640c0cf99f6d60508106e6805d4f\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 17 12:00:27.006941 containerd[1442]: time="2025-01-17T12:00:27.006907104Z" level=info msg="CreateContainer within sandbox \"f3ccaa38190acb23cb1e3a54053033cd4ea0640c0cf99f6d60508106e6805d4f\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"55b554ca5ff6c4e920c3e898079b7760504b9b56f678b4ee8c06590d9745fcb6\"" Jan 17 12:00:27.008208 containerd[1442]: time="2025-01-17T12:00:27.007447862Z" level=info msg="StartContainer for \"55b554ca5ff6c4e920c3e898079b7760504b9b56f678b4ee8c06590d9745fcb6\"" Jan 17 12:00:27.056388 systemd[1]: Started cri-containerd-55b554ca5ff6c4e920c3e898079b7760504b9b56f678b4ee8c06590d9745fcb6.scope - libcontainer container 55b554ca5ff6c4e920c3e898079b7760504b9b56f678b4ee8c06590d9745fcb6. 
Jan 17 12:00:27.093179 containerd[1442]: time="2025-01-17T12:00:27.093121302Z" level=info msg="StartContainer for \"55b554ca5ff6c4e920c3e898079b7760504b9b56f678b4ee8c06590d9745fcb6\" returns successfully" Jan 17 12:00:27.223828 containerd[1442]: time="2025-01-17T12:00:27.223111737Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:00:27.224159 containerd[1442]: time="2025-01-17T12:00:27.224116083Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77" Jan 17 12:00:27.226170 containerd[1442]: time="2025-01-17T12:00:27.226124015Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"40668079\" in 231.069807ms" Jan 17 12:00:27.226170 containerd[1442]: time="2025-01-17T12:00:27.226167941Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\"" Jan 17 12:00:27.227211 containerd[1442]: time="2025-01-17T12:00:27.227117679Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Jan 17 12:00:27.228160 containerd[1442]: time="2025-01-17T12:00:27.228131226Z" level=info msg="CreateContainer within sandbox \"eb7bd0489ff1f064c701a71fada08b3e41f660f2decaf1cd896fda3b6ca5ee9b\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 17 12:00:27.243728 containerd[1442]: time="2025-01-17T12:00:27.243624796Z" level=info msg="CreateContainer within sandbox \"eb7bd0489ff1f064c701a71fada08b3e41f660f2decaf1cd896fda3b6ca5ee9b\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"217f9139e02f8cbfda960fbf2cf3a31ff07b8261e1b34d2998d6563fa1b6629c\"" Jan 17 12:00:27.244364 containerd[1442]: time="2025-01-17T12:00:27.244306775Z" level=info msg="StartContainer for \"217f9139e02f8cbfda960fbf2cf3a31ff07b8261e1b34d2998d6563fa1b6629c\"" Jan 17 12:00:27.276345 systemd[1]: Started cri-containerd-217f9139e02f8cbfda960fbf2cf3a31ff07b8261e1b34d2998d6563fa1b6629c.scope - libcontainer container 217f9139e02f8cbfda960fbf2cf3a31ff07b8261e1b34d2998d6563fa1b6629c. 
Jan 17 12:00:27.314458 containerd[1442]: time="2025-01-17T12:00:27.313656565Z" level=info msg="StartContainer for \"217f9139e02f8cbfda960fbf2cf3a31ff07b8261e1b34d2998d6563fa1b6629c\" returns successfully" Jan 17 12:00:27.370053 kubelet[2465]: I0117 12:00:27.369988 2465 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7d954f8cd6-tmq7s" podStartSLOduration=22.752228554 podStartE2EDuration="39.36996146s" podCreationTimestamp="2025-01-17 11:59:48 +0000 UTC" firstStartedPulling="2025-01-17 12:00:10.609202987 +0000 UTC m=+35.539724799" lastFinishedPulling="2025-01-17 12:00:27.226935813 +0000 UTC m=+52.157457705" observedRunningTime="2025-01-17 12:00:27.355050695 +0000 UTC m=+52.285572547" watchObservedRunningTime="2025-01-17 12:00:27.36996146 +0000 UTC m=+52.300483272" Jan 17 12:00:28.345152 kubelet[2465]: I0117 12:00:28.345090 2465 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 17 12:00:28.345611 kubelet[2465]: I0117 12:00:28.345457 2465 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 17 12:00:28.751528 systemd[1]: Started sshd@11-10.0.0.32:22-10.0.0.1:55258.service - OpenSSH per-connection server daemon (10.0.0.1:55258). Jan 17 12:00:28.809847 sshd[5085]: Accepted publickey for core from 10.0.0.1 port 55258 ssh2: RSA SHA256:xsFjL0Ru499iNfhRyIcjP6wTIWZ5oE8f5Pm6hYv+KHo Jan 17 12:00:28.812643 sshd[5085]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:00:28.819217 systemd-logind[1424]: New session 12 of user core. Jan 17 12:00:28.826366 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 17 12:00:29.046152 sshd[5085]: pam_unix(sshd:session): session closed for user core Jan 17 12:00:29.049587 systemd[1]: sshd@11-10.0.0.32:22-10.0.0.1:55258.service: Deactivated successfully. Jan 17 12:00:29.051665 systemd[1]: session-12.scope: Deactivated successfully. Jan 17 12:00:29.053124 systemd-logind[1424]: Session 12 logged out. Waiting for processes to exit. Jan 17 12:00:29.054477 systemd-logind[1424]: Removed session 12. 
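Reading the pod_startup_latency_tracker entries in this stretch of the log, podStartE2EDuration appears to equal watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration appears to be that figure with the image-pull window (lastFinishedPulling minus firstStartedPulling) subtracted; this is inferred from the numbers here, not confirmed against kubelet source. For the calico-apiserver-7d954f8cd6-mxjf9 entry just below: 12:00:29.549832009 - 11:59:48 = 41.549832009s end to end, the pull window 12:00:10.50430967 to 12:00:26.994903505 is 16.490593835s, and 41.549832009s - 16.490593835s = 25.059238174s, which matches the reported podStartSLOduration.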
Jan 17 12:00:29.523833 kubelet[2465]: I0117 12:00:29.523387 2465 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 17 12:00:29.551859 kubelet[2465]: I0117 12:00:29.549851 2465 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7d954f8cd6-mxjf9" podStartSLOduration=25.059238174 podStartE2EDuration="41.549832009s" podCreationTimestamp="2025-01-17 11:59:48 +0000 UTC" firstStartedPulling="2025-01-17 12:00:10.50430967 +0000 UTC m=+35.434831482" lastFinishedPulling="2025-01-17 12:00:26.994903505 +0000 UTC m=+51.925425317" observedRunningTime="2025-01-17 12:00:27.370955805 +0000 UTC m=+52.301477617" watchObservedRunningTime="2025-01-17 12:00:29.549832009 +0000 UTC m=+54.480353821" Jan 17 12:00:31.469680 containerd[1442]: time="2025-01-17T12:00:31.469634596Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:00:31.470440 containerd[1442]: time="2025-01-17T12:00:31.470402661Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=31953828" Jan 17 12:00:31.471598 containerd[1442]: time="2025-01-17T12:00:31.471355832Z" level=info msg="ImageCreate event name:\"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:00:31.473465 containerd[1442]: time="2025-01-17T12:00:31.473429835Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:00:31.474267 containerd[1442]: time="2025-01-17T12:00:31.474228785Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"33323450\" in 4.24707182s" Jan 17 12:00:31.474267 containerd[1442]: time="2025-01-17T12:00:31.474268870Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\"" Jan 17 12:00:31.475223 containerd[1442]: time="2025-01-17T12:00:31.475057018Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Jan 17 12:00:31.482711 containerd[1442]: time="2025-01-17T12:00:31.482671020Z" level=info msg="CreateContainer within sandbox \"f76a31ac71a6c8f9bd6645f0ca8224fa6b2516bc32aeabccd70d2adf10d9bae6\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jan 17 12:00:31.496869 containerd[1442]: time="2025-01-17T12:00:31.496825837Z" level=info msg="CreateContainer within sandbox \"f76a31ac71a6c8f9bd6645f0ca8224fa6b2516bc32aeabccd70d2adf10d9bae6\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"8deefb952aae3a27e694e5776d56d5887a4276c53d7a3500908b59b37805c433\"" Jan 17 12:00:31.498430 containerd[1442]: time="2025-01-17T12:00:31.497349669Z" level=info msg="StartContainer for \"8deefb952aae3a27e694e5776d56d5887a4276c53d7a3500908b59b37805c433\"" Jan 17 12:00:31.531393 systemd[1]: Started 
cri-containerd-8deefb952aae3a27e694e5776d56d5887a4276c53d7a3500908b59b37805c433.scope - libcontainer container 8deefb952aae3a27e694e5776d56d5887a4276c53d7a3500908b59b37805c433. Jan 17 12:00:31.567136 containerd[1442]: time="2025-01-17T12:00:31.567002681Z" level=info msg="StartContainer for \"8deefb952aae3a27e694e5776d56d5887a4276c53d7a3500908b59b37805c433\" returns successfully" Jan 17 12:00:32.372139 kubelet[2465]: I0117 12:00:32.369245 2465 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-5d976f8577-5mqh2" podStartSLOduration=23.341646673 podStartE2EDuration="43.369217218s" podCreationTimestamp="2025-01-17 11:59:49 +0000 UTC" firstStartedPulling="2025-01-17 12:00:11.447324891 +0000 UTC m=+36.377846703" lastFinishedPulling="2025-01-17 12:00:31.474895436 +0000 UTC m=+56.405417248" observedRunningTime="2025-01-17 12:00:32.367641805 +0000 UTC m=+57.298163657" watchObservedRunningTime="2025-01-17 12:00:32.369217218 +0000 UTC m=+57.299739030" Jan 17 12:00:32.562684 containerd[1442]: time="2025-01-17T12:00:32.562584177Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:00:32.564093 containerd[1442]: time="2025-01-17T12:00:32.563873151Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7464730" Jan 17 12:00:32.565159 containerd[1442]: time="2025-01-17T12:00:32.564982461Z" level=info msg="ImageCreate event name:\"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:00:32.569718 containerd[1442]: time="2025-01-17T12:00:32.569657772Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:00:32.572145 containerd[1442]: time="2025-01-17T12:00:32.571743614Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"8834384\" in 1.096650711s" Jan 17 12:00:32.572145 containerd[1442]: time="2025-01-17T12:00:32.571780779Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\"" Jan 17 12:00:32.574639 containerd[1442]: time="2025-01-17T12:00:32.574608001Z" level=info msg="CreateContainer within sandbox \"0a3840fa2cecbad1b7d87ce6a03ba6616acd100fe0e3619faf6b50663535e25f\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jan 17 12:00:32.587562 containerd[1442]: time="2025-01-17T12:00:32.587440574Z" level=info msg="CreateContainer within sandbox \"0a3840fa2cecbad1b7d87ce6a03ba6616acd100fe0e3619faf6b50663535e25f\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"0786cd17042e8951047f8f3013026ad21caa0d9aade2a54bba397238dd052172\"" Jan 17 12:00:32.588689 containerd[1442]: time="2025-01-17T12:00:32.588650698Z" level=info msg="StartContainer for \"0786cd17042e8951047f8f3013026ad21caa0d9aade2a54bba397238dd052172\"" Jan 17 12:00:32.630383 systemd[1]: Started cri-containerd-0786cd17042e8951047f8f3013026ad21caa0d9aade2a54bba397238dd052172.scope - 
libcontainer container 0786cd17042e8951047f8f3013026ad21caa0d9aade2a54bba397238dd052172. Jan 17 12:00:32.665621 containerd[1442]: time="2025-01-17T12:00:32.665544004Z" level=info msg="StartContainer for \"0786cd17042e8951047f8f3013026ad21caa0d9aade2a54bba397238dd052172\" returns successfully" Jan 17 12:00:32.667982 containerd[1442]: time="2025-01-17T12:00:32.667951849Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Jan 17 12:00:33.709545 containerd[1442]: time="2025-01-17T12:00:33.709494639Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:00:33.710681 containerd[1442]: time="2025-01-17T12:00:33.710633871Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=9883368" Jan 17 12:00:33.714832 containerd[1442]: time="2025-01-17T12:00:33.714772503Z" level=info msg="ImageCreate event name:\"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:00:33.717296 containerd[1442]: time="2025-01-17T12:00:33.717245233Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:00:33.717935 containerd[1442]: time="2025-01-17T12:00:33.717915803Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11252974\" in 1.049820054s" Jan 17 12:00:33.717985 containerd[1442]: time="2025-01-17T12:00:33.717940766Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\"" Jan 17 12:00:33.720012 containerd[1442]: time="2025-01-17T12:00:33.719863943Z" level=info msg="CreateContainer within sandbox \"0a3840fa2cecbad1b7d87ce6a03ba6616acd100fe0e3619faf6b50663535e25f\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jan 17 12:00:33.730925 containerd[1442]: time="2025-01-17T12:00:33.730467997Z" level=info msg="CreateContainer within sandbox \"0a3840fa2cecbad1b7d87ce6a03ba6616acd100fe0e3619faf6b50663535e25f\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"9bbb9d81520e0041ea160d76bb5f12f7011474b600b6f1b197190aa390cb70bb\"" Jan 17 12:00:33.732515 containerd[1442]: time="2025-01-17T12:00:33.731131726Z" level=info msg="StartContainer for \"9bbb9d81520e0041ea160d76bb5f12f7011474b600b6f1b197190aa390cb70bb\"" Jan 17 12:00:33.768409 systemd[1]: Started cri-containerd-9bbb9d81520e0041ea160d76bb5f12f7011474b600b6f1b197190aa390cb70bb.scope - libcontainer container 9bbb9d81520e0041ea160d76bb5f12f7011474b600b6f1b197190aa390cb70bb. 
Jan 17 12:00:33.800835 containerd[1442]: time="2025-01-17T12:00:33.800744053Z" level=info msg="StartContainer for \"9bbb9d81520e0041ea160d76bb5f12f7011474b600b6f1b197190aa390cb70bb\" returns successfully" Jan 17 12:00:34.061925 systemd[1]: Started sshd@12-10.0.0.32:22-10.0.0.1:51396.service - OpenSSH per-connection server daemon (10.0.0.1:51396). Jan 17 12:00:34.108992 sshd[5267]: Accepted publickey for core from 10.0.0.1 port 51396 ssh2: RSA SHA256:xsFjL0Ru499iNfhRyIcjP6wTIWZ5oE8f5Pm6hYv+KHo Jan 17 12:00:34.110468 sshd[5267]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:00:34.114159 systemd-logind[1424]: New session 13 of user core. Jan 17 12:00:34.125356 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 17 12:00:34.248842 kubelet[2465]: I0117 12:00:34.248794 2465 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jan 17 12:00:34.252154 kubelet[2465]: I0117 12:00:34.252127 2465 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jan 17 12:00:34.386298 sshd[5267]: pam_unix(sshd:session): session closed for user core Jan 17 12:00:34.393613 systemd[1]: sshd@12-10.0.0.32:22-10.0.0.1:51396.service: Deactivated successfully. Jan 17 12:00:34.395154 systemd[1]: session-13.scope: Deactivated successfully. Jan 17 12:00:34.396719 systemd-logind[1424]: Session 13 logged out. Waiting for processes to exit. Jan 17 12:00:34.400060 kubelet[2465]: I0117 12:00:34.399863 2465 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-j7bcf" podStartSLOduration=25.194857895 podStartE2EDuration="45.399845804s" podCreationTimestamp="2025-01-17 11:59:49 +0000 UTC" firstStartedPulling="2025-01-17 12:00:13.513627307 +0000 UTC m=+38.444149119" lastFinishedPulling="2025-01-17 12:00:33.718615256 +0000 UTC m=+58.649137028" observedRunningTime="2025-01-17 12:00:34.399073823 +0000 UTC m=+59.329595595" watchObservedRunningTime="2025-01-17 12:00:34.399845804 +0000 UTC m=+59.330367616" Jan 17 12:00:34.407572 systemd[1]: Started sshd@13-10.0.0.32:22-10.0.0.1:51408.service - OpenSSH per-connection server daemon (10.0.0.1:51408). Jan 17 12:00:34.408698 systemd-logind[1424]: Removed session 13. Jan 17 12:00:34.439427 sshd[5282]: Accepted publickey for core from 10.0.0.1 port 51408 ssh2: RSA SHA256:xsFjL0Ru499iNfhRyIcjP6wTIWZ5oE8f5Pm6hYv+KHo Jan 17 12:00:34.440735 sshd[5282]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:00:34.444387 systemd-logind[1424]: New session 14 of user core. Jan 17 12:00:34.451350 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 17 12:00:34.482079 systemd[1]: run-containerd-runc-k8s.io-9bbb9d81520e0041ea160d76bb5f12f7011474b600b6f1b197190aa390cb70bb-runc.mTsr60.mount: Deactivated successfully. Jan 17 12:00:34.647376 sshd[5282]: pam_unix(sshd:session): session closed for user core Jan 17 12:00:34.657522 systemd[1]: sshd@13-10.0.0.32:22-10.0.0.1:51408.service: Deactivated successfully. Jan 17 12:00:34.660953 systemd[1]: session-14.scope: Deactivated successfully. Jan 17 12:00:34.665557 systemd-logind[1424]: Session 14 logged out. Waiting for processes to exit. Jan 17 12:00:34.674497 systemd[1]: Started sshd@14-10.0.0.32:22-10.0.0.1:51420.service - OpenSSH per-connection server daemon (10.0.0.1:51420). 
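(Editor's note, sketch only: the containerd "Pulled image ... in <duration>" entries and the kubelet "Observed pod startup duration" entries above carry the pull times and startup latencies inline. Assuming this journal output has been saved to a plain-text file — the path "node.log" below is a placeholder, and the quote escaping is assumed to match the capture as shown, with embedded quotes written as \" — a small Python sketch could tabulate them:)

```python
import re

# Patterns for two kinds of entries visible above. Escaped quotes (\") follow
# the journal text as captured here; both escaped and unescaped forms are accepted.
PULL_RE = re.compile(
    r'Pulled image \\?"(?P<image>[^"\\]+)\\?".* in (?P<secs>[0-9.]+)s'
)
STARTUP_RE = re.compile(
    r'Observed pod startup duration.*?pod="(?P<pod>[^"]+)"'
    r'.*?podStartSLOduration=(?P<slo>[0-9.]+)'
    r'.*?podStartE2EDuration="(?P<e2e>[^"]+)"'
)

def summarize(path):
    """Print image pull times and pod startup latencies found in a saved log."""
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            m = PULL_RE.search(line)
            if m:
                print(f'pull  {m.group("image")}: {float(m.group("secs")):.2f}s')
            m = STARTUP_RE.search(line)
            if m:
                print(f'start {m.group("pod")}: SLO {m.group("slo")}s, e2e {m.group("e2e")}')

if __name__ == "__main__":
    summarize("node.log")  # placeholder path for the captured journal output
```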
Jan 17 12:00:34.676125 systemd-logind[1424]: Removed session 14. Jan 17 12:00:34.706086 sshd[5294]: Accepted publickey for core from 10.0.0.1 port 51420 ssh2: RSA SHA256:xsFjL0Ru499iNfhRyIcjP6wTIWZ5oE8f5Pm6hYv+KHo Jan 17 12:00:34.707462 sshd[5294]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:00:34.711035 systemd-logind[1424]: New session 15 of user core. Jan 17 12:00:34.720416 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 17 12:00:34.871348 sshd[5294]: pam_unix(sshd:session): session closed for user core Jan 17 12:00:34.874560 systemd[1]: sshd@14-10.0.0.32:22-10.0.0.1:51420.service: Deactivated successfully. Jan 17 12:00:34.876263 systemd[1]: session-15.scope: Deactivated successfully. Jan 17 12:00:34.877001 systemd-logind[1424]: Session 15 logged out. Waiting for processes to exit. Jan 17 12:00:34.878031 systemd-logind[1424]: Removed session 15. Jan 17 12:00:35.148510 containerd[1442]: time="2025-01-17T12:00:35.148462618Z" level=info msg="StopPodSandbox for \"dd2afa2329efa91b2b18827f7bf6a54096f578825faefd31139f31287f5dc7c0\"" Jan 17 12:00:35.235654 containerd[1442]: 2025-01-17 12:00:35.191 [WARNING][5325] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="dd2afa2329efa91b2b18827f7bf6a54096f578825faefd31139f31287f5dc7c0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7d954f8cd6--tmq7s-eth0", GenerateName:"calico-apiserver-7d954f8cd6-", Namespace:"calico-apiserver", SelfLink:"", UID:"8e3c1cb9-32d1-4d37-bc9c-8ecf060d43d5", ResourceVersion:"941", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 11, 59, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7d954f8cd6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"eb7bd0489ff1f064c701a71fada08b3e41f660f2decaf1cd896fda3b6ca5ee9b", Pod:"calico-apiserver-7d954f8cd6-tmq7s", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia37daebb0c4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:00:35.235654 containerd[1442]: 2025-01-17 12:00:35.192 [INFO][5325] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="dd2afa2329efa91b2b18827f7bf6a54096f578825faefd31139f31287f5dc7c0" Jan 17 12:00:35.235654 containerd[1442]: 2025-01-17 12:00:35.192 [INFO][5325] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="dd2afa2329efa91b2b18827f7bf6a54096f578825faefd31139f31287f5dc7c0" iface="eth0" netns="" Jan 17 12:00:35.235654 containerd[1442]: 2025-01-17 12:00:35.192 [INFO][5325] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="dd2afa2329efa91b2b18827f7bf6a54096f578825faefd31139f31287f5dc7c0" Jan 17 12:00:35.235654 containerd[1442]: 2025-01-17 12:00:35.192 [INFO][5325] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="dd2afa2329efa91b2b18827f7bf6a54096f578825faefd31139f31287f5dc7c0" Jan 17 12:00:35.235654 containerd[1442]: 2025-01-17 12:00:35.223 [INFO][5332] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="dd2afa2329efa91b2b18827f7bf6a54096f578825faefd31139f31287f5dc7c0" HandleID="k8s-pod-network.dd2afa2329efa91b2b18827f7bf6a54096f578825faefd31139f31287f5dc7c0" Workload="localhost-k8s-calico--apiserver--7d954f8cd6--tmq7s-eth0" Jan 17 12:00:35.235654 containerd[1442]: 2025-01-17 12:00:35.223 [INFO][5332] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:00:35.235654 containerd[1442]: 2025-01-17 12:00:35.223 [INFO][5332] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:00:35.235654 containerd[1442]: 2025-01-17 12:00:35.231 [WARNING][5332] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="dd2afa2329efa91b2b18827f7bf6a54096f578825faefd31139f31287f5dc7c0" HandleID="k8s-pod-network.dd2afa2329efa91b2b18827f7bf6a54096f578825faefd31139f31287f5dc7c0" Workload="localhost-k8s-calico--apiserver--7d954f8cd6--tmq7s-eth0" Jan 17 12:00:35.235654 containerd[1442]: 2025-01-17 12:00:35.231 [INFO][5332] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="dd2afa2329efa91b2b18827f7bf6a54096f578825faefd31139f31287f5dc7c0" HandleID="k8s-pod-network.dd2afa2329efa91b2b18827f7bf6a54096f578825faefd31139f31287f5dc7c0" Workload="localhost-k8s-calico--apiserver--7d954f8cd6--tmq7s-eth0" Jan 17 12:00:35.235654 containerd[1442]: 2025-01-17 12:00:35.232 [INFO][5332] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:00:35.235654 containerd[1442]: 2025-01-17 12:00:35.234 [INFO][5325] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="dd2afa2329efa91b2b18827f7bf6a54096f578825faefd31139f31287f5dc7c0" Jan 17 12:00:35.236176 containerd[1442]: time="2025-01-17T12:00:35.235693472Z" level=info msg="TearDown network for sandbox \"dd2afa2329efa91b2b18827f7bf6a54096f578825faefd31139f31287f5dc7c0\" successfully" Jan 17 12:00:35.236176 containerd[1442]: time="2025-01-17T12:00:35.235718955Z" level=info msg="StopPodSandbox for \"dd2afa2329efa91b2b18827f7bf6a54096f578825faefd31139f31287f5dc7c0\" returns successfully" Jan 17 12:00:35.236349 containerd[1442]: time="2025-01-17T12:00:35.236317594Z" level=info msg="RemovePodSandbox for \"dd2afa2329efa91b2b18827f7bf6a54096f578825faefd31139f31287f5dc7c0\"" Jan 17 12:00:35.236397 containerd[1442]: time="2025-01-17T12:00:35.236357919Z" level=info msg="Forcibly stopping sandbox \"dd2afa2329efa91b2b18827f7bf6a54096f578825faefd31139f31287f5dc7c0\"" Jan 17 12:00:35.297775 containerd[1442]: 2025-01-17 12:00:35.268 [WARNING][5354] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="dd2afa2329efa91b2b18827f7bf6a54096f578825faefd31139f31287f5dc7c0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7d954f8cd6--tmq7s-eth0", GenerateName:"calico-apiserver-7d954f8cd6-", Namespace:"calico-apiserver", SelfLink:"", UID:"8e3c1cb9-32d1-4d37-bc9c-8ecf060d43d5", ResourceVersion:"941", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 11, 59, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7d954f8cd6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"eb7bd0489ff1f064c701a71fada08b3e41f660f2decaf1cd896fda3b6ca5ee9b", Pod:"calico-apiserver-7d954f8cd6-tmq7s", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia37daebb0c4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:00:35.297775 containerd[1442]: 2025-01-17 12:00:35.268 [INFO][5354] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="dd2afa2329efa91b2b18827f7bf6a54096f578825faefd31139f31287f5dc7c0" Jan 17 12:00:35.297775 containerd[1442]: 2025-01-17 12:00:35.268 [INFO][5354] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="dd2afa2329efa91b2b18827f7bf6a54096f578825faefd31139f31287f5dc7c0" iface="eth0" netns="" Jan 17 12:00:35.297775 containerd[1442]: 2025-01-17 12:00:35.268 [INFO][5354] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="dd2afa2329efa91b2b18827f7bf6a54096f578825faefd31139f31287f5dc7c0" Jan 17 12:00:35.297775 containerd[1442]: 2025-01-17 12:00:35.268 [INFO][5354] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="dd2afa2329efa91b2b18827f7bf6a54096f578825faefd31139f31287f5dc7c0" Jan 17 12:00:35.297775 containerd[1442]: 2025-01-17 12:00:35.285 [INFO][5361] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="dd2afa2329efa91b2b18827f7bf6a54096f578825faefd31139f31287f5dc7c0" HandleID="k8s-pod-network.dd2afa2329efa91b2b18827f7bf6a54096f578825faefd31139f31287f5dc7c0" Workload="localhost-k8s-calico--apiserver--7d954f8cd6--tmq7s-eth0" Jan 17 12:00:35.297775 containerd[1442]: 2025-01-17 12:00:35.286 [INFO][5361] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:00:35.297775 containerd[1442]: 2025-01-17 12:00:35.286 [INFO][5361] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:00:35.297775 containerd[1442]: 2025-01-17 12:00:35.293 [WARNING][5361] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="dd2afa2329efa91b2b18827f7bf6a54096f578825faefd31139f31287f5dc7c0" HandleID="k8s-pod-network.dd2afa2329efa91b2b18827f7bf6a54096f578825faefd31139f31287f5dc7c0" Workload="localhost-k8s-calico--apiserver--7d954f8cd6--tmq7s-eth0" Jan 17 12:00:35.297775 containerd[1442]: 2025-01-17 12:00:35.293 [INFO][5361] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="dd2afa2329efa91b2b18827f7bf6a54096f578825faefd31139f31287f5dc7c0" HandleID="k8s-pod-network.dd2afa2329efa91b2b18827f7bf6a54096f578825faefd31139f31287f5dc7c0" Workload="localhost-k8s-calico--apiserver--7d954f8cd6--tmq7s-eth0" Jan 17 12:00:35.297775 containerd[1442]: 2025-01-17 12:00:35.295 [INFO][5361] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:00:35.297775 containerd[1442]: 2025-01-17 12:00:35.296 [INFO][5354] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="dd2afa2329efa91b2b18827f7bf6a54096f578825faefd31139f31287f5dc7c0" Jan 17 12:00:35.298231 containerd[1442]: time="2025-01-17T12:00:35.297824454Z" level=info msg="TearDown network for sandbox \"dd2afa2329efa91b2b18827f7bf6a54096f578825faefd31139f31287f5dc7c0\" successfully" Jan 17 12:00:35.321054 containerd[1442]: time="2025-01-17T12:00:35.320999195Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"dd2afa2329efa91b2b18827f7bf6a54096f578825faefd31139f31287f5dc7c0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 12:00:35.321207 containerd[1442]: time="2025-01-17T12:00:35.321078326Z" level=info msg="RemovePodSandbox \"dd2afa2329efa91b2b18827f7bf6a54096f578825faefd31139f31287f5dc7c0\" returns successfully" Jan 17 12:00:35.322816 containerd[1442]: time="2025-01-17T12:00:35.322510232Z" level=info msg="StopPodSandbox for \"0bb0ba1d5cab4545c0dfd64be4f487c0773838c5999074dba52161d526304434\"" Jan 17 12:00:35.391453 containerd[1442]: 2025-01-17 12:00:35.354 [WARNING][5390] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0bb0ba1d5cab4545c0dfd64be4f487c0773838c5999074dba52161d526304434" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5d976f8577--5mqh2-eth0", GenerateName:"calico-kube-controllers-5d976f8577-", Namespace:"calico-system", SelfLink:"", UID:"5dfed3bd-1c28-4d8c-bf55-5f0787bcf7c5", ResourceVersion:"982", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 11, 59, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5d976f8577", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f76a31ac71a6c8f9bd6645f0ca8224fa6b2516bc32aeabccd70d2adf10d9bae6", Pod:"calico-kube-controllers-5d976f8577-5mqh2", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia32072bdda3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:00:35.391453 containerd[1442]: 2025-01-17 12:00:35.354 [INFO][5390] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="0bb0ba1d5cab4545c0dfd64be4f487c0773838c5999074dba52161d526304434" Jan 17 12:00:35.391453 containerd[1442]: 2025-01-17 12:00:35.354 [INFO][5390] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0bb0ba1d5cab4545c0dfd64be4f487c0773838c5999074dba52161d526304434" iface="eth0" netns="" Jan 17 12:00:35.391453 containerd[1442]: 2025-01-17 12:00:35.354 [INFO][5390] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="0bb0ba1d5cab4545c0dfd64be4f487c0773838c5999074dba52161d526304434" Jan 17 12:00:35.391453 containerd[1442]: 2025-01-17 12:00:35.354 [INFO][5390] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0bb0ba1d5cab4545c0dfd64be4f487c0773838c5999074dba52161d526304434" Jan 17 12:00:35.391453 containerd[1442]: 2025-01-17 12:00:35.372 [INFO][5397] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0bb0ba1d5cab4545c0dfd64be4f487c0773838c5999074dba52161d526304434" HandleID="k8s-pod-network.0bb0ba1d5cab4545c0dfd64be4f487c0773838c5999074dba52161d526304434" Workload="localhost-k8s-calico--kube--controllers--5d976f8577--5mqh2-eth0" Jan 17 12:00:35.391453 containerd[1442]: 2025-01-17 12:00:35.372 [INFO][5397] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:00:35.391453 containerd[1442]: 2025-01-17 12:00:35.372 [INFO][5397] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:00:35.391453 containerd[1442]: 2025-01-17 12:00:35.382 [WARNING][5397] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0bb0ba1d5cab4545c0dfd64be4f487c0773838c5999074dba52161d526304434" HandleID="k8s-pod-network.0bb0ba1d5cab4545c0dfd64be4f487c0773838c5999074dba52161d526304434" Workload="localhost-k8s-calico--kube--controllers--5d976f8577--5mqh2-eth0" Jan 17 12:00:35.391453 containerd[1442]: 2025-01-17 12:00:35.382 [INFO][5397] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0bb0ba1d5cab4545c0dfd64be4f487c0773838c5999074dba52161d526304434" HandleID="k8s-pod-network.0bb0ba1d5cab4545c0dfd64be4f487c0773838c5999074dba52161d526304434" Workload="localhost-k8s-calico--kube--controllers--5d976f8577--5mqh2-eth0" Jan 17 12:00:35.391453 containerd[1442]: 2025-01-17 12:00:35.385 [INFO][5397] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:00:35.391453 containerd[1442]: 2025-01-17 12:00:35.387 [INFO][5390] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="0bb0ba1d5cab4545c0dfd64be4f487c0773838c5999074dba52161d526304434" Jan 17 12:00:35.391841 containerd[1442]: time="2025-01-17T12:00:35.391470464Z" level=info msg="TearDown network for sandbox \"0bb0ba1d5cab4545c0dfd64be4f487c0773838c5999074dba52161d526304434\" successfully" Jan 17 12:00:35.391841 containerd[1442]: time="2025-01-17T12:00:35.391493667Z" level=info msg="StopPodSandbox for \"0bb0ba1d5cab4545c0dfd64be4f487c0773838c5999074dba52161d526304434\" returns successfully" Jan 17 12:00:35.392073 containerd[1442]: time="2025-01-17T12:00:35.391989132Z" level=info msg="RemovePodSandbox for \"0bb0ba1d5cab4545c0dfd64be4f487c0773838c5999074dba52161d526304434\"" Jan 17 12:00:35.392073 containerd[1442]: time="2025-01-17T12:00:35.392060981Z" level=info msg="Forcibly stopping sandbox \"0bb0ba1d5cab4545c0dfd64be4f487c0773838c5999074dba52161d526304434\"" Jan 17 12:00:35.455825 containerd[1442]: 2025-01-17 12:00:35.425 [WARNING][5420] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0bb0ba1d5cab4545c0dfd64be4f487c0773838c5999074dba52161d526304434" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5d976f8577--5mqh2-eth0", GenerateName:"calico-kube-controllers-5d976f8577-", Namespace:"calico-system", SelfLink:"", UID:"5dfed3bd-1c28-4d8c-bf55-5f0787bcf7c5", ResourceVersion:"982", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 11, 59, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5d976f8577", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f76a31ac71a6c8f9bd6645f0ca8224fa6b2516bc32aeabccd70d2adf10d9bae6", Pod:"calico-kube-controllers-5d976f8577-5mqh2", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia32072bdda3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:00:35.455825 containerd[1442]: 2025-01-17 12:00:35.425 [INFO][5420] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="0bb0ba1d5cab4545c0dfd64be4f487c0773838c5999074dba52161d526304434" Jan 17 12:00:35.455825 containerd[1442]: 2025-01-17 12:00:35.425 [INFO][5420] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0bb0ba1d5cab4545c0dfd64be4f487c0773838c5999074dba52161d526304434" iface="eth0" netns="" Jan 17 12:00:35.455825 containerd[1442]: 2025-01-17 12:00:35.425 [INFO][5420] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="0bb0ba1d5cab4545c0dfd64be4f487c0773838c5999074dba52161d526304434" Jan 17 12:00:35.455825 containerd[1442]: 2025-01-17 12:00:35.425 [INFO][5420] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0bb0ba1d5cab4545c0dfd64be4f487c0773838c5999074dba52161d526304434" Jan 17 12:00:35.455825 containerd[1442]: 2025-01-17 12:00:35.443 [INFO][5427] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0bb0ba1d5cab4545c0dfd64be4f487c0773838c5999074dba52161d526304434" HandleID="k8s-pod-network.0bb0ba1d5cab4545c0dfd64be4f487c0773838c5999074dba52161d526304434" Workload="localhost-k8s-calico--kube--controllers--5d976f8577--5mqh2-eth0" Jan 17 12:00:35.455825 containerd[1442]: 2025-01-17 12:00:35.443 [INFO][5427] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:00:35.455825 containerd[1442]: 2025-01-17 12:00:35.443 [INFO][5427] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:00:35.455825 containerd[1442]: 2025-01-17 12:00:35.451 [WARNING][5427] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0bb0ba1d5cab4545c0dfd64be4f487c0773838c5999074dba52161d526304434" HandleID="k8s-pod-network.0bb0ba1d5cab4545c0dfd64be4f487c0773838c5999074dba52161d526304434" Workload="localhost-k8s-calico--kube--controllers--5d976f8577--5mqh2-eth0" Jan 17 12:00:35.455825 containerd[1442]: 2025-01-17 12:00:35.451 [INFO][5427] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0bb0ba1d5cab4545c0dfd64be4f487c0773838c5999074dba52161d526304434" HandleID="k8s-pod-network.0bb0ba1d5cab4545c0dfd64be4f487c0773838c5999074dba52161d526304434" Workload="localhost-k8s-calico--kube--controllers--5d976f8577--5mqh2-eth0" Jan 17 12:00:35.455825 containerd[1442]: 2025-01-17 12:00:35.452 [INFO][5427] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:00:35.455825 containerd[1442]: 2025-01-17 12:00:35.454 [INFO][5420] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="0bb0ba1d5cab4545c0dfd64be4f487c0773838c5999074dba52161d526304434" Jan 17 12:00:35.456236 containerd[1442]: time="2025-01-17T12:00:35.455854580Z" level=info msg="TearDown network for sandbox \"0bb0ba1d5cab4545c0dfd64be4f487c0773838c5999074dba52161d526304434\" successfully" Jan 17 12:00:35.466148 containerd[1442]: time="2025-01-17T12:00:35.465991221Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0bb0ba1d5cab4545c0dfd64be4f487c0773838c5999074dba52161d526304434\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 12:00:35.466304 containerd[1442]: time="2025-01-17T12:00:35.466269258Z" level=info msg="RemovePodSandbox \"0bb0ba1d5cab4545c0dfd64be4f487c0773838c5999074dba52161d526304434\" returns successfully" Jan 17 12:00:35.467038 containerd[1442]: time="2025-01-17T12:00:35.466999073Z" level=info msg="StopPodSandbox for \"b78391e2512c7a5fe0821cbd72999136727c5cad9d4a85d68b507395aa2d3f12\"" Jan 17 12:00:35.527574 containerd[1442]: 2025-01-17 12:00:35.498 [WARNING][5450] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b78391e2512c7a5fe0821cbd72999136727c5cad9d4a85d68b507395aa2d3f12" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--j7bcf-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"074801cb-bd28-41a4-b464-ef5bfb657c08", ResourceVersion:"1002", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 11, 59, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0a3840fa2cecbad1b7d87ce6a03ba6616acd100fe0e3619faf6b50663535e25f", Pod:"csi-node-driver-j7bcf", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calide1479d73c4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:00:35.527574 containerd[1442]: 2025-01-17 12:00:35.498 [INFO][5450] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b78391e2512c7a5fe0821cbd72999136727c5cad9d4a85d68b507395aa2d3f12" Jan 17 12:00:35.527574 containerd[1442]: 2025-01-17 12:00:35.498 [INFO][5450] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b78391e2512c7a5fe0821cbd72999136727c5cad9d4a85d68b507395aa2d3f12" iface="eth0" netns="" Jan 17 12:00:35.527574 containerd[1442]: 2025-01-17 12:00:35.498 [INFO][5450] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b78391e2512c7a5fe0821cbd72999136727c5cad9d4a85d68b507395aa2d3f12" Jan 17 12:00:35.527574 containerd[1442]: 2025-01-17 12:00:35.498 [INFO][5450] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b78391e2512c7a5fe0821cbd72999136727c5cad9d4a85d68b507395aa2d3f12" Jan 17 12:00:35.527574 containerd[1442]: 2025-01-17 12:00:35.515 [INFO][5458] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b78391e2512c7a5fe0821cbd72999136727c5cad9d4a85d68b507395aa2d3f12" HandleID="k8s-pod-network.b78391e2512c7a5fe0821cbd72999136727c5cad9d4a85d68b507395aa2d3f12" Workload="localhost-k8s-csi--node--driver--j7bcf-eth0" Jan 17 12:00:35.527574 containerd[1442]: 2025-01-17 12:00:35.515 [INFO][5458] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:00:35.527574 containerd[1442]: 2025-01-17 12:00:35.515 [INFO][5458] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:00:35.527574 containerd[1442]: 2025-01-17 12:00:35.523 [WARNING][5458] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b78391e2512c7a5fe0821cbd72999136727c5cad9d4a85d68b507395aa2d3f12" HandleID="k8s-pod-network.b78391e2512c7a5fe0821cbd72999136727c5cad9d4a85d68b507395aa2d3f12" Workload="localhost-k8s-csi--node--driver--j7bcf-eth0" Jan 17 12:00:35.527574 containerd[1442]: 2025-01-17 12:00:35.523 [INFO][5458] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b78391e2512c7a5fe0821cbd72999136727c5cad9d4a85d68b507395aa2d3f12" HandleID="k8s-pod-network.b78391e2512c7a5fe0821cbd72999136727c5cad9d4a85d68b507395aa2d3f12" Workload="localhost-k8s-csi--node--driver--j7bcf-eth0" Jan 17 12:00:35.527574 containerd[1442]: 2025-01-17 12:00:35.524 [INFO][5458] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:00:35.527574 containerd[1442]: 2025-01-17 12:00:35.526 [INFO][5450] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="b78391e2512c7a5fe0821cbd72999136727c5cad9d4a85d68b507395aa2d3f12" Jan 17 12:00:35.527965 containerd[1442]: time="2025-01-17T12:00:35.527617617Z" level=info msg="TearDown network for sandbox \"b78391e2512c7a5fe0821cbd72999136727c5cad9d4a85d68b507395aa2d3f12\" successfully" Jan 17 12:00:35.527965 containerd[1442]: time="2025-01-17T12:00:35.527646181Z" level=info msg="StopPodSandbox for \"b78391e2512c7a5fe0821cbd72999136727c5cad9d4a85d68b507395aa2d3f12\" returns successfully" Jan 17 12:00:35.528091 containerd[1442]: time="2025-01-17T12:00:35.528062035Z" level=info msg="RemovePodSandbox for \"b78391e2512c7a5fe0821cbd72999136727c5cad9d4a85d68b507395aa2d3f12\"" Jan 17 12:00:35.528126 containerd[1442]: time="2025-01-17T12:00:35.528098680Z" level=info msg="Forcibly stopping sandbox \"b78391e2512c7a5fe0821cbd72999136727c5cad9d4a85d68b507395aa2d3f12\"" Jan 17 12:00:35.593435 containerd[1442]: 2025-01-17 12:00:35.559 [WARNING][5480] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b78391e2512c7a5fe0821cbd72999136727c5cad9d4a85d68b507395aa2d3f12" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--j7bcf-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"074801cb-bd28-41a4-b464-ef5bfb657c08", ResourceVersion:"1002", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 11, 59, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0a3840fa2cecbad1b7d87ce6a03ba6616acd100fe0e3619faf6b50663535e25f", Pod:"csi-node-driver-j7bcf", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calide1479d73c4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:00:35.593435 containerd[1442]: 2025-01-17 12:00:35.559 [INFO][5480] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b78391e2512c7a5fe0821cbd72999136727c5cad9d4a85d68b507395aa2d3f12" Jan 17 12:00:35.593435 containerd[1442]: 2025-01-17 12:00:35.559 [INFO][5480] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b78391e2512c7a5fe0821cbd72999136727c5cad9d4a85d68b507395aa2d3f12" iface="eth0" netns="" Jan 17 12:00:35.593435 containerd[1442]: 2025-01-17 12:00:35.559 [INFO][5480] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b78391e2512c7a5fe0821cbd72999136727c5cad9d4a85d68b507395aa2d3f12" Jan 17 12:00:35.593435 containerd[1442]: 2025-01-17 12:00:35.559 [INFO][5480] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b78391e2512c7a5fe0821cbd72999136727c5cad9d4a85d68b507395aa2d3f12" Jan 17 12:00:35.593435 containerd[1442]: 2025-01-17 12:00:35.577 [INFO][5487] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b78391e2512c7a5fe0821cbd72999136727c5cad9d4a85d68b507395aa2d3f12" HandleID="k8s-pod-network.b78391e2512c7a5fe0821cbd72999136727c5cad9d4a85d68b507395aa2d3f12" Workload="localhost-k8s-csi--node--driver--j7bcf-eth0" Jan 17 12:00:35.593435 containerd[1442]: 2025-01-17 12:00:35.577 [INFO][5487] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:00:35.593435 containerd[1442]: 2025-01-17 12:00:35.577 [INFO][5487] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:00:35.593435 containerd[1442]: 2025-01-17 12:00:35.586 [WARNING][5487] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b78391e2512c7a5fe0821cbd72999136727c5cad9d4a85d68b507395aa2d3f12" HandleID="k8s-pod-network.b78391e2512c7a5fe0821cbd72999136727c5cad9d4a85d68b507395aa2d3f12" Workload="localhost-k8s-csi--node--driver--j7bcf-eth0" Jan 17 12:00:35.593435 containerd[1442]: 2025-01-17 12:00:35.586 [INFO][5487] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b78391e2512c7a5fe0821cbd72999136727c5cad9d4a85d68b507395aa2d3f12" HandleID="k8s-pod-network.b78391e2512c7a5fe0821cbd72999136727c5cad9d4a85d68b507395aa2d3f12" Workload="localhost-k8s-csi--node--driver--j7bcf-eth0" Jan 17 12:00:35.593435 containerd[1442]: 2025-01-17 12:00:35.588 [INFO][5487] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:00:35.593435 containerd[1442]: 2025-01-17 12:00:35.590 [INFO][5480] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="b78391e2512c7a5fe0821cbd72999136727c5cad9d4a85d68b507395aa2d3f12" Jan 17 12:00:35.593823 containerd[1442]: time="2025-01-17T12:00:35.593477485Z" level=info msg="TearDown network for sandbox \"b78391e2512c7a5fe0821cbd72999136727c5cad9d4a85d68b507395aa2d3f12\" successfully" Jan 17 12:00:35.597813 containerd[1442]: time="2025-01-17T12:00:35.597749802Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b78391e2512c7a5fe0821cbd72999136727c5cad9d4a85d68b507395aa2d3f12\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 12:00:35.597937 containerd[1442]: time="2025-01-17T12:00:35.597819531Z" level=info msg="RemovePodSandbox \"b78391e2512c7a5fe0821cbd72999136727c5cad9d4a85d68b507395aa2d3f12\" returns successfully" Jan 17 12:00:35.598354 containerd[1442]: time="2025-01-17T12:00:35.598320356Z" level=info msg="StopPodSandbox for \"addf4a3ebbde98a78ec691c6e1c2d748806a5739b054be87f34be7db8f5ca751\"" Jan 17 12:00:35.669681 containerd[1442]: 2025-01-17 12:00:35.634 [WARNING][5510] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="addf4a3ebbde98a78ec691c6e1c2d748806a5739b054be87f34be7db8f5ca751" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--28s6w-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"bf214c06-9b1d-4dac-a1ec-12f7f34d3261", ResourceVersion:"870", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 11, 59, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e59a1e5bd9bd3ba25081d33f67c4a823c78d3dbad3b306b68749adb845d4b829", Pod:"coredns-6f6b679f8f-28s6w", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calife70c094590", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:00:35.669681 containerd[1442]: 2025-01-17 12:00:35.634 [INFO][5510] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="addf4a3ebbde98a78ec691c6e1c2d748806a5739b054be87f34be7db8f5ca751" Jan 17 12:00:35.669681 containerd[1442]: 2025-01-17 12:00:35.634 [INFO][5510] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="addf4a3ebbde98a78ec691c6e1c2d748806a5739b054be87f34be7db8f5ca751" iface="eth0" netns="" Jan 17 12:00:35.669681 containerd[1442]: 2025-01-17 12:00:35.634 [INFO][5510] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="addf4a3ebbde98a78ec691c6e1c2d748806a5739b054be87f34be7db8f5ca751" Jan 17 12:00:35.669681 containerd[1442]: 2025-01-17 12:00:35.634 [INFO][5510] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="addf4a3ebbde98a78ec691c6e1c2d748806a5739b054be87f34be7db8f5ca751" Jan 17 12:00:35.669681 containerd[1442]: 2025-01-17 12:00:35.653 [INFO][5518] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="addf4a3ebbde98a78ec691c6e1c2d748806a5739b054be87f34be7db8f5ca751" HandleID="k8s-pod-network.addf4a3ebbde98a78ec691c6e1c2d748806a5739b054be87f34be7db8f5ca751" Workload="localhost-k8s-coredns--6f6b679f8f--28s6w-eth0" Jan 17 12:00:35.669681 containerd[1442]: 2025-01-17 12:00:35.653 [INFO][5518] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:00:35.669681 containerd[1442]: 2025-01-17 12:00:35.653 [INFO][5518] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 17 12:00:35.669681 containerd[1442]: 2025-01-17 12:00:35.661 [WARNING][5518] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="addf4a3ebbde98a78ec691c6e1c2d748806a5739b054be87f34be7db8f5ca751" HandleID="k8s-pod-network.addf4a3ebbde98a78ec691c6e1c2d748806a5739b054be87f34be7db8f5ca751" Workload="localhost-k8s-coredns--6f6b679f8f--28s6w-eth0" Jan 17 12:00:35.669681 containerd[1442]: 2025-01-17 12:00:35.662 [INFO][5518] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="addf4a3ebbde98a78ec691c6e1c2d748806a5739b054be87f34be7db8f5ca751" HandleID="k8s-pod-network.addf4a3ebbde98a78ec691c6e1c2d748806a5739b054be87f34be7db8f5ca751" Workload="localhost-k8s-coredns--6f6b679f8f--28s6w-eth0" Jan 17 12:00:35.669681 containerd[1442]: 2025-01-17 12:00:35.665 [INFO][5518] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:00:35.669681 containerd[1442]: 2025-01-17 12:00:35.668 [INFO][5510] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="addf4a3ebbde98a78ec691c6e1c2d748806a5739b054be87f34be7db8f5ca751" Jan 17 12:00:35.670354 containerd[1442]: time="2025-01-17T12:00:35.669720786Z" level=info msg="TearDown network for sandbox \"addf4a3ebbde98a78ec691c6e1c2d748806a5739b054be87f34be7db8f5ca751\" successfully" Jan 17 12:00:35.670354 containerd[1442]: time="2025-01-17T12:00:35.669745309Z" level=info msg="StopPodSandbox for \"addf4a3ebbde98a78ec691c6e1c2d748806a5739b054be87f34be7db8f5ca751\" returns successfully" Jan 17 12:00:35.670354 containerd[1442]: time="2025-01-17T12:00:35.670275338Z" level=info msg="RemovePodSandbox for \"addf4a3ebbde98a78ec691c6e1c2d748806a5739b054be87f34be7db8f5ca751\"" Jan 17 12:00:35.670354 containerd[1442]: time="2025-01-17T12:00:35.670307383Z" level=info msg="Forcibly stopping sandbox \"addf4a3ebbde98a78ec691c6e1c2d748806a5739b054be87f34be7db8f5ca751\"" Jan 17 12:00:35.739863 containerd[1442]: 2025-01-17 12:00:35.708 [WARNING][5541] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="addf4a3ebbde98a78ec691c6e1c2d748806a5739b054be87f34be7db8f5ca751" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--28s6w-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"bf214c06-9b1d-4dac-a1ec-12f7f34d3261", ResourceVersion:"870", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 11, 59, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e59a1e5bd9bd3ba25081d33f67c4a823c78d3dbad3b306b68749adb845d4b829", Pod:"coredns-6f6b679f8f-28s6w", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calife70c094590", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:00:35.739863 containerd[1442]: 2025-01-17 12:00:35.708 [INFO][5541] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="addf4a3ebbde98a78ec691c6e1c2d748806a5739b054be87f34be7db8f5ca751" Jan 17 12:00:35.739863 containerd[1442]: 2025-01-17 12:00:35.708 [INFO][5541] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="addf4a3ebbde98a78ec691c6e1c2d748806a5739b054be87f34be7db8f5ca751" iface="eth0" netns="" Jan 17 12:00:35.739863 containerd[1442]: 2025-01-17 12:00:35.708 [INFO][5541] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="addf4a3ebbde98a78ec691c6e1c2d748806a5739b054be87f34be7db8f5ca751" Jan 17 12:00:35.739863 containerd[1442]: 2025-01-17 12:00:35.708 [INFO][5541] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="addf4a3ebbde98a78ec691c6e1c2d748806a5739b054be87f34be7db8f5ca751" Jan 17 12:00:35.739863 containerd[1442]: 2025-01-17 12:00:35.726 [INFO][5548] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="addf4a3ebbde98a78ec691c6e1c2d748806a5739b054be87f34be7db8f5ca751" HandleID="k8s-pod-network.addf4a3ebbde98a78ec691c6e1c2d748806a5739b054be87f34be7db8f5ca751" Workload="localhost-k8s-coredns--6f6b679f8f--28s6w-eth0" Jan 17 12:00:35.739863 containerd[1442]: 2025-01-17 12:00:35.726 [INFO][5548] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:00:35.739863 containerd[1442]: 2025-01-17 12:00:35.726 [INFO][5548] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 17 12:00:35.739863 containerd[1442]: 2025-01-17 12:00:35.734 [WARNING][5548] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="addf4a3ebbde98a78ec691c6e1c2d748806a5739b054be87f34be7db8f5ca751" HandleID="k8s-pod-network.addf4a3ebbde98a78ec691c6e1c2d748806a5739b054be87f34be7db8f5ca751" Workload="localhost-k8s-coredns--6f6b679f8f--28s6w-eth0" Jan 17 12:00:35.739863 containerd[1442]: 2025-01-17 12:00:35.734 [INFO][5548] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="addf4a3ebbde98a78ec691c6e1c2d748806a5739b054be87f34be7db8f5ca751" HandleID="k8s-pod-network.addf4a3ebbde98a78ec691c6e1c2d748806a5739b054be87f34be7db8f5ca751" Workload="localhost-k8s-coredns--6f6b679f8f--28s6w-eth0" Jan 17 12:00:35.739863 containerd[1442]: 2025-01-17 12:00:35.736 [INFO][5548] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:00:35.739863 containerd[1442]: 2025-01-17 12:00:35.737 [INFO][5541] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="addf4a3ebbde98a78ec691c6e1c2d748806a5739b054be87f34be7db8f5ca751" Jan 17 12:00:35.739863 containerd[1442]: time="2025-01-17T12:00:35.739351065Z" level=info msg="TearDown network for sandbox \"addf4a3ebbde98a78ec691c6e1c2d748806a5739b054be87f34be7db8f5ca751\" successfully" Jan 17 12:00:35.742099 containerd[1442]: time="2025-01-17T12:00:35.742039816Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"addf4a3ebbde98a78ec691c6e1c2d748806a5739b054be87f34be7db8f5ca751\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 12:00:35.742159 containerd[1442]: time="2025-01-17T12:00:35.742104384Z" level=info msg="RemovePodSandbox \"addf4a3ebbde98a78ec691c6e1c2d748806a5739b054be87f34be7db8f5ca751\" returns successfully" Jan 17 12:00:35.742841 containerd[1442]: time="2025-01-17T12:00:35.742662137Z" level=info msg="StopPodSandbox for \"e8269b6e1d7b9d8eda00e9aff3d042ffa04d54a55d72c2ff69881df4d2a9b02f\"" Jan 17 12:00:35.811611 containerd[1442]: 2025-01-17 12:00:35.778 [WARNING][5571] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e8269b6e1d7b9d8eda00e9aff3d042ffa04d54a55d72c2ff69881df4d2a9b02f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--2qxc4-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"9990c74b-816e-4bf1-9470-a9d91243af45", ResourceVersion:"814", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 11, 59, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ab4fbb05977a0b08161fa487c537cf03dd4de5e556f1c6a97a9dca94b6c42f7f", Pod:"coredns-6f6b679f8f-2qxc4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9d73cbdcb43", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:00:35.811611 containerd[1442]: 2025-01-17 12:00:35.778 [INFO][5571] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="e8269b6e1d7b9d8eda00e9aff3d042ffa04d54a55d72c2ff69881df4d2a9b02f" Jan 17 12:00:35.811611 containerd[1442]: 2025-01-17 12:00:35.778 [INFO][5571] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e8269b6e1d7b9d8eda00e9aff3d042ffa04d54a55d72c2ff69881df4d2a9b02f" iface="eth0" netns="" Jan 17 12:00:35.811611 containerd[1442]: 2025-01-17 12:00:35.778 [INFO][5571] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e8269b6e1d7b9d8eda00e9aff3d042ffa04d54a55d72c2ff69881df4d2a9b02f" Jan 17 12:00:35.811611 containerd[1442]: 2025-01-17 12:00:35.778 [INFO][5571] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e8269b6e1d7b9d8eda00e9aff3d042ffa04d54a55d72c2ff69881df4d2a9b02f" Jan 17 12:00:35.811611 containerd[1442]: 2025-01-17 12:00:35.797 [INFO][5579] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e8269b6e1d7b9d8eda00e9aff3d042ffa04d54a55d72c2ff69881df4d2a9b02f" HandleID="k8s-pod-network.e8269b6e1d7b9d8eda00e9aff3d042ffa04d54a55d72c2ff69881df4d2a9b02f" Workload="localhost-k8s-coredns--6f6b679f8f--2qxc4-eth0" Jan 17 12:00:35.811611 containerd[1442]: 2025-01-17 12:00:35.798 [INFO][5579] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:00:35.811611 containerd[1442]: 2025-01-17 12:00:35.798 [INFO][5579] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 17 12:00:35.811611 containerd[1442]: 2025-01-17 12:00:35.806 [WARNING][5579] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="e8269b6e1d7b9d8eda00e9aff3d042ffa04d54a55d72c2ff69881df4d2a9b02f" HandleID="k8s-pod-network.e8269b6e1d7b9d8eda00e9aff3d042ffa04d54a55d72c2ff69881df4d2a9b02f" Workload="localhost-k8s-coredns--6f6b679f8f--2qxc4-eth0" Jan 17 12:00:35.811611 containerd[1442]: 2025-01-17 12:00:35.807 [INFO][5579] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e8269b6e1d7b9d8eda00e9aff3d042ffa04d54a55d72c2ff69881df4d2a9b02f" HandleID="k8s-pod-network.e8269b6e1d7b9d8eda00e9aff3d042ffa04d54a55d72c2ff69881df4d2a9b02f" Workload="localhost-k8s-coredns--6f6b679f8f--2qxc4-eth0" Jan 17 12:00:35.811611 containerd[1442]: 2025-01-17 12:00:35.808 [INFO][5579] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:00:35.811611 containerd[1442]: 2025-01-17 12:00:35.810 [INFO][5571] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="e8269b6e1d7b9d8eda00e9aff3d042ffa04d54a55d72c2ff69881df4d2a9b02f" Jan 17 12:00:35.812011 containerd[1442]: time="2025-01-17T12:00:35.811656974Z" level=info msg="TearDown network for sandbox \"e8269b6e1d7b9d8eda00e9aff3d042ffa04d54a55d72c2ff69881df4d2a9b02f\" successfully" Jan 17 12:00:35.812011 containerd[1442]: time="2025-01-17T12:00:35.811680937Z" level=info msg="StopPodSandbox for \"e8269b6e1d7b9d8eda00e9aff3d042ffa04d54a55d72c2ff69881df4d2a9b02f\" returns successfully" Jan 17 12:00:35.812216 containerd[1442]: time="2025-01-17T12:00:35.812175441Z" level=info msg="RemovePodSandbox for \"e8269b6e1d7b9d8eda00e9aff3d042ffa04d54a55d72c2ff69881df4d2a9b02f\"" Jan 17 12:00:35.812249 containerd[1442]: time="2025-01-17T12:00:35.812224928Z" level=info msg="Forcibly stopping sandbox \"e8269b6e1d7b9d8eda00e9aff3d042ffa04d54a55d72c2ff69881df4d2a9b02f\"" Jan 17 12:00:35.878977 containerd[1442]: 2025-01-17 12:00:35.846 [WARNING][5601] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e8269b6e1d7b9d8eda00e9aff3d042ffa04d54a55d72c2ff69881df4d2a9b02f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--2qxc4-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"9990c74b-816e-4bf1-9470-a9d91243af45", ResourceVersion:"814", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 11, 59, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ab4fbb05977a0b08161fa487c537cf03dd4de5e556f1c6a97a9dca94b6c42f7f", Pod:"coredns-6f6b679f8f-2qxc4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9d73cbdcb43", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:00:35.878977 containerd[1442]: 2025-01-17 12:00:35.846 [INFO][5601] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="e8269b6e1d7b9d8eda00e9aff3d042ffa04d54a55d72c2ff69881df4d2a9b02f" Jan 17 12:00:35.878977 containerd[1442]: 2025-01-17 12:00:35.846 [INFO][5601] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e8269b6e1d7b9d8eda00e9aff3d042ffa04d54a55d72c2ff69881df4d2a9b02f" iface="eth0" netns="" Jan 17 12:00:35.878977 containerd[1442]: 2025-01-17 12:00:35.846 [INFO][5601] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e8269b6e1d7b9d8eda00e9aff3d042ffa04d54a55d72c2ff69881df4d2a9b02f" Jan 17 12:00:35.878977 containerd[1442]: 2025-01-17 12:00:35.846 [INFO][5601] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e8269b6e1d7b9d8eda00e9aff3d042ffa04d54a55d72c2ff69881df4d2a9b02f" Jan 17 12:00:35.878977 containerd[1442]: 2025-01-17 12:00:35.863 [INFO][5608] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e8269b6e1d7b9d8eda00e9aff3d042ffa04d54a55d72c2ff69881df4d2a9b02f" HandleID="k8s-pod-network.e8269b6e1d7b9d8eda00e9aff3d042ffa04d54a55d72c2ff69881df4d2a9b02f" Workload="localhost-k8s-coredns--6f6b679f8f--2qxc4-eth0" Jan 17 12:00:35.878977 containerd[1442]: 2025-01-17 12:00:35.863 [INFO][5608] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:00:35.878977 containerd[1442]: 2025-01-17 12:00:35.863 [INFO][5608] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 17 12:00:35.878977 containerd[1442]: 2025-01-17 12:00:35.873 [WARNING][5608] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="e8269b6e1d7b9d8eda00e9aff3d042ffa04d54a55d72c2ff69881df4d2a9b02f" HandleID="k8s-pod-network.e8269b6e1d7b9d8eda00e9aff3d042ffa04d54a55d72c2ff69881df4d2a9b02f" Workload="localhost-k8s-coredns--6f6b679f8f--2qxc4-eth0" Jan 17 12:00:35.878977 containerd[1442]: 2025-01-17 12:00:35.873 [INFO][5608] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e8269b6e1d7b9d8eda00e9aff3d042ffa04d54a55d72c2ff69881df4d2a9b02f" HandleID="k8s-pod-network.e8269b6e1d7b9d8eda00e9aff3d042ffa04d54a55d72c2ff69881df4d2a9b02f" Workload="localhost-k8s-coredns--6f6b679f8f--2qxc4-eth0" Jan 17 12:00:35.878977 containerd[1442]: 2025-01-17 12:00:35.875 [INFO][5608] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:00:35.878977 containerd[1442]: 2025-01-17 12:00:35.877 [INFO][5601] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="e8269b6e1d7b9d8eda00e9aff3d042ffa04d54a55d72c2ff69881df4d2a9b02f" Jan 17 12:00:35.879378 containerd[1442]: time="2025-01-17T12:00:35.879004195Z" level=info msg="TearDown network for sandbox \"e8269b6e1d7b9d8eda00e9aff3d042ffa04d54a55d72c2ff69881df4d2a9b02f\" successfully" Jan 17 12:00:35.929082 containerd[1442]: time="2025-01-17T12:00:35.929010556Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e8269b6e1d7b9d8eda00e9aff3d042ffa04d54a55d72c2ff69881df4d2a9b02f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 12:00:35.929244 containerd[1442]: time="2025-01-17T12:00:35.929096167Z" level=info msg="RemovePodSandbox \"e8269b6e1d7b9d8eda00e9aff3d042ffa04d54a55d72c2ff69881df4d2a9b02f\" returns successfully" Jan 17 12:00:35.929618 containerd[1442]: time="2025-01-17T12:00:35.929586071Z" level=info msg="StopPodSandbox for \"1a6f8de862b36904f696733422ad5d8c23138bccd11e7816eb827af270530096\"" Jan 17 12:00:35.994711 containerd[1442]: 2025-01-17 12:00:35.964 [WARNING][5631] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1a6f8de862b36904f696733422ad5d8c23138bccd11e7816eb827af270530096" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7d954f8cd6--mxjf9-eth0", GenerateName:"calico-apiserver-7d954f8cd6-", Namespace:"calico-apiserver", SelfLink:"", UID:"6c20dd64-894e-4ff1-b1cd-c8495df31316", ResourceVersion:"952", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 11, 59, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7d954f8cd6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f3ccaa38190acb23cb1e3a54053033cd4ea0640c0cf99f6d60508106e6805d4f", Pod:"calico-apiserver-7d954f8cd6-mxjf9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0f95db42209", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:00:35.994711 containerd[1442]: 2025-01-17 12:00:35.965 [INFO][5631] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="1a6f8de862b36904f696733422ad5d8c23138bccd11e7816eb827af270530096" Jan 17 12:00:35.994711 containerd[1442]: 2025-01-17 12:00:35.965 [INFO][5631] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1a6f8de862b36904f696733422ad5d8c23138bccd11e7816eb827af270530096" iface="eth0" netns="" Jan 17 12:00:35.994711 containerd[1442]: 2025-01-17 12:00:35.965 [INFO][5631] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="1a6f8de862b36904f696733422ad5d8c23138bccd11e7816eb827af270530096" Jan 17 12:00:35.994711 containerd[1442]: 2025-01-17 12:00:35.965 [INFO][5631] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1a6f8de862b36904f696733422ad5d8c23138bccd11e7816eb827af270530096" Jan 17 12:00:35.994711 containerd[1442]: 2025-01-17 12:00:35.982 [INFO][5638] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1a6f8de862b36904f696733422ad5d8c23138bccd11e7816eb827af270530096" HandleID="k8s-pod-network.1a6f8de862b36904f696733422ad5d8c23138bccd11e7816eb827af270530096" Workload="localhost-k8s-calico--apiserver--7d954f8cd6--mxjf9-eth0" Jan 17 12:00:35.994711 containerd[1442]: 2025-01-17 12:00:35.982 [INFO][5638] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:00:35.994711 containerd[1442]: 2025-01-17 12:00:35.982 [INFO][5638] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:00:35.994711 containerd[1442]: 2025-01-17 12:00:35.990 [WARNING][5638] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1a6f8de862b36904f696733422ad5d8c23138bccd11e7816eb827af270530096" HandleID="k8s-pod-network.1a6f8de862b36904f696733422ad5d8c23138bccd11e7816eb827af270530096" Workload="localhost-k8s-calico--apiserver--7d954f8cd6--mxjf9-eth0" Jan 17 12:00:35.994711 containerd[1442]: 2025-01-17 12:00:35.990 [INFO][5638] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1a6f8de862b36904f696733422ad5d8c23138bccd11e7816eb827af270530096" HandleID="k8s-pod-network.1a6f8de862b36904f696733422ad5d8c23138bccd11e7816eb827af270530096" Workload="localhost-k8s-calico--apiserver--7d954f8cd6--mxjf9-eth0" Jan 17 12:00:35.994711 containerd[1442]: 2025-01-17 12:00:35.991 [INFO][5638] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:00:35.994711 containerd[1442]: 2025-01-17 12:00:35.993 [INFO][5631] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="1a6f8de862b36904f696733422ad5d8c23138bccd11e7816eb827af270530096" Jan 17 12:00:35.994711 containerd[1442]: time="2025-01-17T12:00:35.994688440Z" level=info msg="TearDown network for sandbox \"1a6f8de862b36904f696733422ad5d8c23138bccd11e7816eb827af270530096\" successfully" Jan 17 12:00:35.995094 containerd[1442]: time="2025-01-17T12:00:35.994714443Z" level=info msg="StopPodSandbox for \"1a6f8de862b36904f696733422ad5d8c23138bccd11e7816eb827af270530096\" returns successfully" Jan 17 12:00:35.995806 containerd[1442]: time="2025-01-17T12:00:35.995662247Z" level=info msg="RemovePodSandbox for \"1a6f8de862b36904f696733422ad5d8c23138bccd11e7816eb827af270530096\"" Jan 17 12:00:35.995806 containerd[1442]: time="2025-01-17T12:00:35.995695451Z" level=info msg="Forcibly stopping sandbox \"1a6f8de862b36904f696733422ad5d8c23138bccd11e7816eb827af270530096\"" Jan 17 12:00:36.062790 containerd[1442]: 2025-01-17 12:00:36.031 [WARNING][5661] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1a6f8de862b36904f696733422ad5d8c23138bccd11e7816eb827af270530096" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7d954f8cd6--mxjf9-eth0", GenerateName:"calico-apiserver-7d954f8cd6-", Namespace:"calico-apiserver", SelfLink:"", UID:"6c20dd64-894e-4ff1-b1cd-c8495df31316", ResourceVersion:"952", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 11, 59, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7d954f8cd6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f3ccaa38190acb23cb1e3a54053033cd4ea0640c0cf99f6d60508106e6805d4f", Pod:"calico-apiserver-7d954f8cd6-mxjf9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0f95db42209", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:00:36.062790 containerd[1442]: 2025-01-17 12:00:36.031 [INFO][5661] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="1a6f8de862b36904f696733422ad5d8c23138bccd11e7816eb827af270530096" Jan 17 12:00:36.062790 containerd[1442]: 2025-01-17 12:00:36.031 [INFO][5661] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1a6f8de862b36904f696733422ad5d8c23138bccd11e7816eb827af270530096" iface="eth0" netns="" Jan 17 12:00:36.062790 containerd[1442]: 2025-01-17 12:00:36.032 [INFO][5661] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="1a6f8de862b36904f696733422ad5d8c23138bccd11e7816eb827af270530096" Jan 17 12:00:36.062790 containerd[1442]: 2025-01-17 12:00:36.032 [INFO][5661] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1a6f8de862b36904f696733422ad5d8c23138bccd11e7816eb827af270530096" Jan 17 12:00:36.062790 containerd[1442]: 2025-01-17 12:00:36.050 [INFO][5669] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1a6f8de862b36904f696733422ad5d8c23138bccd11e7816eb827af270530096" HandleID="k8s-pod-network.1a6f8de862b36904f696733422ad5d8c23138bccd11e7816eb827af270530096" Workload="localhost-k8s-calico--apiserver--7d954f8cd6--mxjf9-eth0" Jan 17 12:00:36.062790 containerd[1442]: 2025-01-17 12:00:36.050 [INFO][5669] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:00:36.062790 containerd[1442]: 2025-01-17 12:00:36.050 [INFO][5669] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:00:36.062790 containerd[1442]: 2025-01-17 12:00:36.058 [WARNING][5669] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1a6f8de862b36904f696733422ad5d8c23138bccd11e7816eb827af270530096" HandleID="k8s-pod-network.1a6f8de862b36904f696733422ad5d8c23138bccd11e7816eb827af270530096" Workload="localhost-k8s-calico--apiserver--7d954f8cd6--mxjf9-eth0" Jan 17 12:00:36.062790 containerd[1442]: 2025-01-17 12:00:36.058 [INFO][5669] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1a6f8de862b36904f696733422ad5d8c23138bccd11e7816eb827af270530096" HandleID="k8s-pod-network.1a6f8de862b36904f696733422ad5d8c23138bccd11e7816eb827af270530096" Workload="localhost-k8s-calico--apiserver--7d954f8cd6--mxjf9-eth0" Jan 17 12:00:36.062790 containerd[1442]: 2025-01-17 12:00:36.059 [INFO][5669] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:00:36.062790 containerd[1442]: 2025-01-17 12:00:36.061 [INFO][5661] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="1a6f8de862b36904f696733422ad5d8c23138bccd11e7816eb827af270530096" Jan 17 12:00:36.063533 containerd[1442]: time="2025-01-17T12:00:36.062889248Z" level=info msg="TearDown network for sandbox \"1a6f8de862b36904f696733422ad5d8c23138bccd11e7816eb827af270530096\" successfully" Jan 17 12:00:36.081986 containerd[1442]: time="2025-01-17T12:00:36.072720556Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1a6f8de862b36904f696733422ad5d8c23138bccd11e7816eb827af270530096\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 12:00:36.081986 containerd[1442]: time="2025-01-17T12:00:36.081905382Z" level=info msg="RemovePodSandbox \"1a6f8de862b36904f696733422ad5d8c23138bccd11e7816eb827af270530096\" returns successfully" Jan 17 12:00:39.882740 systemd[1]: Started sshd@15-10.0.0.32:22-10.0.0.1:51422.service - OpenSSH per-connection server daemon (10.0.0.1:51422). Jan 17 12:00:39.923083 sshd[5686]: Accepted publickey for core from 10.0.0.1 port 51422 ssh2: RSA SHA256:xsFjL0Ru499iNfhRyIcjP6wTIWZ5oE8f5Pm6hYv+KHo Jan 17 12:00:39.924746 sshd[5686]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:00:39.928844 systemd-logind[1424]: New session 16 of user core. Jan 17 12:00:39.936337 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 17 12:00:40.062676 sshd[5686]: pam_unix(sshd:session): session closed for user core Jan 17 12:00:40.066136 systemd[1]: sshd@15-10.0.0.32:22-10.0.0.1:51422.service: Deactivated successfully. Jan 17 12:00:40.067851 systemd[1]: session-16.scope: Deactivated successfully. Jan 17 12:00:40.068473 systemd-logind[1424]: Session 16 logged out. Waiting for processes to exit. Jan 17 12:00:40.069372 systemd-logind[1424]: Removed session 16. Jan 17 12:00:43.140047 kubelet[2465]: E0117 12:00:43.139950 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:00:43.547487 kubelet[2465]: E0117 12:00:43.547453 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:00:45.080588 systemd[1]: Started sshd@16-10.0.0.32:22-10.0.0.1:42566.service - OpenSSH per-connection server daemon (10.0.0.1:42566). 
Jan 17 12:00:45.129238 sshd[5725]: Accepted publickey for core from 10.0.0.1 port 42566 ssh2: RSA SHA256:xsFjL0Ru499iNfhRyIcjP6wTIWZ5oE8f5Pm6hYv+KHo Jan 17 12:00:45.130483 sshd[5725]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:00:45.135315 systemd-logind[1424]: New session 17 of user core. Jan 17 12:00:45.146943 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 17 12:00:45.329838 sshd[5725]: pam_unix(sshd:session): session closed for user core Jan 17 12:00:45.332433 systemd[1]: sshd@16-10.0.0.32:22-10.0.0.1:42566.service: Deactivated successfully. Jan 17 12:00:45.334949 systemd[1]: session-17.scope: Deactivated successfully. Jan 17 12:00:45.336860 systemd-logind[1424]: Session 17 logged out. Waiting for processes to exit. Jan 17 12:00:45.341605 systemd-logind[1424]: Removed session 17. Jan 17 12:00:49.140969 kubelet[2465]: E0117 12:00:49.140409 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:00:49.764105 kubelet[2465]: I0117 12:00:49.763847 2465 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 17 12:00:50.343404 systemd[1]: Started sshd@17-10.0.0.32:22-10.0.0.1:42570.service - OpenSSH per-connection server daemon (10.0.0.1:42570). Jan 17 12:00:50.383405 sshd[5745]: Accepted publickey for core from 10.0.0.1 port 42570 ssh2: RSA SHA256:xsFjL0Ru499iNfhRyIcjP6wTIWZ5oE8f5Pm6hYv+KHo Jan 17 12:00:50.386486 sshd[5745]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:00:50.391058 systemd-logind[1424]: New session 18 of user core. Jan 17 12:00:50.403354 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 17 12:00:50.618011 sshd[5745]: pam_unix(sshd:session): session closed for user core Jan 17 12:00:50.623740 systemd[1]: sshd@17-10.0.0.32:22-10.0.0.1:42570.service: Deactivated successfully. Jan 17 12:00:50.626597 systemd[1]: session-18.scope: Deactivated successfully. Jan 17 12:00:50.627222 systemd-logind[1424]: Session 18 logged out. Waiting for processes to exit. Jan 17 12:00:50.628032 systemd-logind[1424]: Removed session 18. Jan 17 12:00:52.140487 kubelet[2465]: E0117 12:00:52.140210 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:00:55.632063 systemd[1]: Started sshd@18-10.0.0.32:22-10.0.0.1:56464.service - OpenSSH per-connection server daemon (10.0.0.1:56464). Jan 17 12:00:55.669581 sshd[5761]: Accepted publickey for core from 10.0.0.1 port 56464 ssh2: RSA SHA256:xsFjL0Ru499iNfhRyIcjP6wTIWZ5oE8f5Pm6hYv+KHo Jan 17 12:00:55.671022 sshd[5761]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:00:55.674738 systemd-logind[1424]: New session 19 of user core. Jan 17 12:00:55.683374 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 17 12:00:55.824372 sshd[5761]: pam_unix(sshd:session): session closed for user core Jan 17 12:00:55.830095 systemd[1]: sshd@18-10.0.0.32:22-10.0.0.1:56464.service: Deactivated successfully. Jan 17 12:00:55.833942 systemd[1]: session-19.scope: Deactivated successfully. Jan 17 12:00:55.834621 systemd-logind[1424]: Session 19 logged out. Waiting for processes to exit. Jan 17 12:00:55.835592 systemd-logind[1424]: Removed session 19. 
Jan 17 12:01:00.834996 systemd[1]: Started sshd@19-10.0.0.32:22-10.0.0.1:56476.service - OpenSSH per-connection server daemon (10.0.0.1:56476). Jan 17 12:01:00.881258 sshd[5783]: Accepted publickey for core from 10.0.0.1 port 56476 ssh2: RSA SHA256:xsFjL0Ru499iNfhRyIcjP6wTIWZ5oE8f5Pm6hYv+KHo Jan 17 12:01:00.882786 sshd[5783]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:01:00.887671 systemd-logind[1424]: New session 20 of user core. Jan 17 12:01:00.897388 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 17 12:01:01.034127 sshd[5783]: pam_unix(sshd:session): session closed for user core Jan 17 12:01:01.037707 systemd[1]: sshd@19-10.0.0.32:22-10.0.0.1:56476.service: Deactivated successfully. Jan 17 12:01:01.041131 systemd[1]: session-20.scope: Deactivated successfully. Jan 17 12:01:01.041749 systemd-logind[1424]: Session 20 logged out. Waiting for processes to exit. Jan 17 12:01:01.042727 systemd-logind[1424]: Removed session 20. Jan 17 12:01:01.140633 kubelet[2465]: E0117 12:01:01.140525 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:01:06.044790 systemd[1]: Started sshd@20-10.0.0.32:22-10.0.0.1:37630.service - OpenSSH per-connection server daemon (10.0.0.1:37630). Jan 17 12:01:06.092249 sshd[5819]: Accepted publickey for core from 10.0.0.1 port 37630 ssh2: RSA SHA256:xsFjL0Ru499iNfhRyIcjP6wTIWZ5oE8f5Pm6hYv+KHo Jan 17 12:01:06.093778 sshd[5819]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:01:06.098209 systemd-logind[1424]: New session 21 of user core. Jan 17 12:01:06.107430 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 17 12:01:06.273458 sshd[5819]: pam_unix(sshd:session): session closed for user core Jan 17 12:01:06.277868 systemd[1]: sshd@20-10.0.0.32:22-10.0.0.1:37630.service: Deactivated successfully. Jan 17 12:01:06.279913 systemd[1]: session-21.scope: Deactivated successfully. Jan 17 12:01:06.280748 systemd-logind[1424]: Session 21 logged out. Waiting for processes to exit. Jan 17 12:01:06.281645 systemd-logind[1424]: Removed session 21. Jan 17 12:01:11.285037 systemd[1]: Started sshd@21-10.0.0.32:22-10.0.0.1:37640.service - OpenSSH per-connection server daemon (10.0.0.1:37640). Jan 17 12:01:11.330221 sshd[5834]: Accepted publickey for core from 10.0.0.1 port 37640 ssh2: RSA SHA256:xsFjL0Ru499iNfhRyIcjP6wTIWZ5oE8f5Pm6hYv+KHo Jan 17 12:01:11.330946 sshd[5834]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:01:11.334498 systemd-logind[1424]: New session 22 of user core. Jan 17 12:01:11.342416 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 17 12:01:11.506519 sshd[5834]: pam_unix(sshd:session): session closed for user core Jan 17 12:01:11.518212 systemd[1]: sshd@21-10.0.0.32:22-10.0.0.1:37640.service: Deactivated successfully. Jan 17 12:01:11.520903 systemd[1]: session-22.scope: Deactivated successfully. Jan 17 12:01:11.522548 systemd-logind[1424]: Session 22 logged out. Waiting for processes to exit. Jan 17 12:01:11.534519 systemd[1]: Started sshd@22-10.0.0.32:22-10.0.0.1:37652.service - OpenSSH per-connection server daemon (10.0.0.1:37652). Jan 17 12:01:11.536249 systemd-logind[1424]: Removed session 22. 
Jan 17 12:01:11.568053 sshd[5848]: Accepted publickey for core from 10.0.0.1 port 37652 ssh2: RSA SHA256:xsFjL0Ru499iNfhRyIcjP6wTIWZ5oE8f5Pm6hYv+KHo Jan 17 12:01:11.569543 sshd[5848]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:01:11.573714 systemd-logind[1424]: New session 23 of user core. Jan 17 12:01:11.583363 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 17 12:01:11.780015 sshd[5848]: pam_unix(sshd:session): session closed for user core Jan 17 12:01:11.790897 systemd[1]: sshd@22-10.0.0.32:22-10.0.0.1:37652.service: Deactivated successfully. Jan 17 12:01:11.792972 systemd[1]: session-23.scope: Deactivated successfully. Jan 17 12:01:11.794220 systemd-logind[1424]: Session 23 logged out. Waiting for processes to exit. Jan 17 12:01:11.801014 systemd[1]: Started sshd@23-10.0.0.32:22-10.0.0.1:37654.service - OpenSSH per-connection server daemon (10.0.0.1:37654). Jan 17 12:01:11.803404 systemd-logind[1424]: Removed session 23. Jan 17 12:01:11.838352 sshd[5861]: Accepted publickey for core from 10.0.0.1 port 37654 ssh2: RSA SHA256:xsFjL0Ru499iNfhRyIcjP6wTIWZ5oE8f5Pm6hYv+KHo Jan 17 12:01:11.839609 sshd[5861]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:01:11.843766 systemd-logind[1424]: New session 24 of user core. Jan 17 12:01:11.851368 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 17 12:01:13.354822 sshd[5861]: pam_unix(sshd:session): session closed for user core Jan 17 12:01:13.364943 systemd[1]: sshd@23-10.0.0.32:22-10.0.0.1:37654.service: Deactivated successfully. Jan 17 12:01:13.369313 systemd[1]: session-24.scope: Deactivated successfully. Jan 17 12:01:13.371901 systemd-logind[1424]: Session 24 logged out. Waiting for processes to exit. Jan 17 12:01:13.378611 systemd[1]: Started sshd@24-10.0.0.32:22-10.0.0.1:46070.service - OpenSSH per-connection server daemon (10.0.0.1:46070). Jan 17 12:01:13.382375 systemd-logind[1424]: Removed session 24. Jan 17 12:01:13.414382 sshd[5886]: Accepted publickey for core from 10.0.0.1 port 46070 ssh2: RSA SHA256:xsFjL0Ru499iNfhRyIcjP6wTIWZ5oE8f5Pm6hYv+KHo Jan 17 12:01:13.415611 sshd[5886]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:01:13.419350 systemd-logind[1424]: New session 25 of user core. Jan 17 12:01:13.429419 systemd[1]: Started session-25.scope - Session 25 of User core. Jan 17 12:01:13.795248 sshd[5886]: pam_unix(sshd:session): session closed for user core Jan 17 12:01:13.807852 systemd[1]: sshd@24-10.0.0.32:22-10.0.0.1:46070.service: Deactivated successfully. Jan 17 12:01:13.809645 systemd[1]: session-25.scope: Deactivated successfully. Jan 17 12:01:13.811510 systemd-logind[1424]: Session 25 logged out. Waiting for processes to exit. Jan 17 12:01:13.820790 systemd[1]: Started sshd@25-10.0.0.32:22-10.0.0.1:46072.service - OpenSSH per-connection server daemon (10.0.0.1:46072). Jan 17 12:01:13.824148 systemd-logind[1424]: Removed session 25. Jan 17 12:01:13.852670 sshd[5920]: Accepted publickey for core from 10.0.0.1 port 46072 ssh2: RSA SHA256:xsFjL0Ru499iNfhRyIcjP6wTIWZ5oE8f5Pm6hYv+KHo Jan 17 12:01:13.854142 sshd[5920]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:01:13.858048 systemd-logind[1424]: New session 26 of user core. Jan 17 12:01:13.867503 systemd[1]: Started session-26.scope - Session 26 of User core. 
Jan 17 12:01:13.990374 sshd[5920]: pam_unix(sshd:session): session closed for user core Jan 17 12:01:13.994225 systemd[1]: sshd@25-10.0.0.32:22-10.0.0.1:46072.service: Deactivated successfully. Jan 17 12:01:13.996366 systemd[1]: session-26.scope: Deactivated successfully. Jan 17 12:01:13.997010 systemd-logind[1424]: Session 26 logged out. Waiting for processes to exit. Jan 17 12:01:13.997790 systemd-logind[1424]: Removed session 26. Jan 17 12:01:19.000855 systemd[1]: Started sshd@26-10.0.0.32:22-10.0.0.1:46078.service - OpenSSH per-connection server daemon (10.0.0.1:46078). Jan 17 12:01:19.035906 sshd[5938]: Accepted publickey for core from 10.0.0.1 port 46078 ssh2: RSA SHA256:xsFjL0Ru499iNfhRyIcjP6wTIWZ5oE8f5Pm6hYv+KHo Jan 17 12:01:19.037087 sshd[5938]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:01:19.040518 systemd-logind[1424]: New session 27 of user core. Jan 17 12:01:19.050342 systemd[1]: Started session-27.scope - Session 27 of User core. Jan 17 12:01:19.174415 sshd[5938]: pam_unix(sshd:session): session closed for user core Jan 17 12:01:19.177553 systemd[1]: sshd@26-10.0.0.32:22-10.0.0.1:46078.service: Deactivated successfully. Jan 17 12:01:19.179489 systemd[1]: session-27.scope: Deactivated successfully. Jan 17 12:01:19.181755 systemd-logind[1424]: Session 27 logged out. Waiting for processes to exit. Jan 17 12:01:19.182493 systemd-logind[1424]: Removed session 27. Jan 17 12:01:22.139875 kubelet[2465]: E0117 12:01:22.139834 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:01:24.185172 systemd[1]: Started sshd@27-10.0.0.32:22-10.0.0.1:51252.service - OpenSSH per-connection server daemon (10.0.0.1:51252). Jan 17 12:01:24.220302 sshd[5952]: Accepted publickey for core from 10.0.0.1 port 51252 ssh2: RSA SHA256:xsFjL0Ru499iNfhRyIcjP6wTIWZ5oE8f5Pm6hYv+KHo Jan 17 12:01:24.221549 sshd[5952]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:01:24.225367 systemd-logind[1424]: New session 28 of user core. Jan 17 12:01:24.235325 systemd[1]: Started session-28.scope - Session 28 of User core. Jan 17 12:01:24.352487 sshd[5952]: pam_unix(sshd:session): session closed for user core Jan 17 12:01:24.355868 systemd[1]: sshd@27-10.0.0.32:22-10.0.0.1:51252.service: Deactivated successfully. Jan 17 12:01:24.357530 systemd[1]: session-28.scope: Deactivated successfully. Jan 17 12:01:24.358046 systemd-logind[1424]: Session 28 logged out. Waiting for processes to exit. Jan 17 12:01:24.358745 systemd-logind[1424]: Removed session 28. Jan 17 12:01:28.141150 kubelet[2465]: E0117 12:01:28.141104 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:01:28.141841 kubelet[2465]: E0117 12:01:28.141813 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:01:29.362951 systemd[1]: Started sshd@28-10.0.0.32:22-10.0.0.1:51264.service - OpenSSH per-connection server daemon (10.0.0.1:51264). 
Jan 17 12:01:29.417648 sshd[5966]: Accepted publickey for core from 10.0.0.1 port 51264 ssh2: RSA SHA256:xsFjL0Ru499iNfhRyIcjP6wTIWZ5oE8f5Pm6hYv+KHo Jan 17 12:01:29.419041 sshd[5966]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:01:29.424973 systemd-logind[1424]: New session 29 of user core. Jan 17 12:01:29.435386 systemd[1]: Started session-29.scope - Session 29 of User core. Jan 17 12:01:29.564405 sshd[5966]: pam_unix(sshd:session): session closed for user core Jan 17 12:01:29.568104 systemd[1]: sshd@28-10.0.0.32:22-10.0.0.1:51264.service: Deactivated successfully. Jan 17 12:01:29.569947 systemd[1]: session-29.scope: Deactivated successfully. Jan 17 12:01:29.570554 systemd-logind[1424]: Session 29 logged out. Waiting for processes to exit. Jan 17 12:01:29.571291 systemd-logind[1424]: Removed session 29.