Dec 13 01:31:59.903446 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Dec 13 01:31:59.903467 kernel: Linux version 6.6.65-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Thu Dec 12 23:24:21 -00 2024 Dec 13 01:31:59.903477 kernel: KASLR enabled Dec 13 01:31:59.903482 kernel: efi: EFI v2.7 by EDK II Dec 13 01:31:59.903488 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdba86018 ACPI 2.0=0xd9710018 RNG=0xd971e498 MEMRESERVE=0xd9b43d18 Dec 13 01:31:59.903494 kernel: random: crng init done Dec 13 01:31:59.903501 kernel: ACPI: Early table checksum verification disabled Dec 13 01:31:59.903507 kernel: ACPI: RSDP 0x00000000D9710018 000024 (v02 BOCHS ) Dec 13 01:31:59.903513 kernel: ACPI: XSDT 0x00000000D971FE98 000064 (v01 BOCHS BXPC 00000001 01000013) Dec 13 01:31:59.903521 kernel: ACPI: FACP 0x00000000D971FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 01:31:59.903527 kernel: ACPI: DSDT 0x00000000D9717518 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 01:31:59.903533 kernel: ACPI: APIC 0x00000000D971FC18 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 01:31:59.903539 kernel: ACPI: PPTT 0x00000000D971D898 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 01:31:59.903545 kernel: ACPI: GTDT 0x00000000D971E818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 01:31:59.903552 kernel: ACPI: MCFG 0x00000000D971E918 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 01:31:59.903560 kernel: ACPI: SPCR 0x00000000D971FF98 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 01:31:59.903566 kernel: ACPI: DBG2 0x00000000D971E418 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 01:31:59.903573 kernel: ACPI: IORT 0x00000000D971E718 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 01:31:59.903579 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600 Dec 13 01:31:59.903585 kernel: NUMA: Failed to initialise from firmware Dec 13 01:31:59.903592 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff] Dec 13 01:31:59.903598 kernel: NUMA: NODE_DATA [mem 0xdc958800-0xdc95dfff] Dec 13 01:31:59.903605 kernel: Zone ranges: Dec 13 01:31:59.903611 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff] Dec 13 01:31:59.903618 kernel: DMA32 empty Dec 13 01:31:59.903625 kernel: Normal empty Dec 13 01:31:59.903631 kernel: Movable zone start for each node Dec 13 01:31:59.903638 kernel: Early memory node ranges Dec 13 01:31:59.903644 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff] Dec 13 01:31:59.903650 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff] Dec 13 01:31:59.903657 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff] Dec 13 01:31:59.903663 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff] Dec 13 01:31:59.903669 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff] Dec 13 01:31:59.903675 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff] Dec 13 01:31:59.903682 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff] Dec 13 01:31:59.903688 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff] Dec 13 01:31:59.903695 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges Dec 13 01:31:59.903702 kernel: psci: probing for conduit method from ACPI. Dec 13 01:31:59.903709 kernel: psci: PSCIv1.1 detected in firmware. 
Dec 13 01:31:59.903715 kernel: psci: Using standard PSCI v0.2 function IDs Dec 13 01:31:59.903724 kernel: psci: Trusted OS migration not required Dec 13 01:31:59.903731 kernel: psci: SMC Calling Convention v1.1 Dec 13 01:31:59.903738 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) Dec 13 01:31:59.903755 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976 Dec 13 01:31:59.903761 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096 Dec 13 01:31:59.903768 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 Dec 13 01:31:59.903775 kernel: Detected PIPT I-cache on CPU0 Dec 13 01:31:59.903782 kernel: CPU features: detected: GIC system register CPU interface Dec 13 01:31:59.903789 kernel: CPU features: detected: Hardware dirty bit management Dec 13 01:31:59.903796 kernel: CPU features: detected: Spectre-v4 Dec 13 01:31:59.903803 kernel: CPU features: detected: Spectre-BHB Dec 13 01:31:59.903819 kernel: CPU features: kernel page table isolation forced ON by KASLR Dec 13 01:31:59.903826 kernel: CPU features: detected: Kernel page table isolation (KPTI) Dec 13 01:31:59.903835 kernel: CPU features: detected: ARM erratum 1418040 Dec 13 01:31:59.903854 kernel: CPU features: detected: SSBS not fully self-synchronizing Dec 13 01:31:59.903861 kernel: alternatives: applying boot alternatives Dec 13 01:31:59.903869 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=9494f75a68cfbdce95d0d2f9b58d6d75bc38ee5b4e31dfc2a6da695ffafefba6 Dec 13 01:31:59.903878 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Dec 13 01:31:59.903884 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Dec 13 01:31:59.903891 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Dec 13 01:31:59.903898 kernel: Fallback order for Node 0: 0 Dec 13 01:31:59.903905 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024 Dec 13 01:31:59.903911 kernel: Policy zone: DMA Dec 13 01:31:59.903918 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Dec 13 01:31:59.903926 kernel: software IO TLB: area num 4. Dec 13 01:31:59.903933 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB) Dec 13 01:31:59.903940 kernel: Memory: 2386532K/2572288K available (10240K kernel code, 2184K rwdata, 8096K rodata, 39360K init, 897K bss, 185756K reserved, 0K cma-reserved) Dec 13 01:31:59.903947 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Dec 13 01:31:59.903954 kernel: trace event string verifier disabled Dec 13 01:31:59.903961 kernel: rcu: Preemptible hierarchical RCU implementation. Dec 13 01:31:59.903969 kernel: rcu: RCU event tracing is enabled. Dec 13 01:31:59.903976 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Dec 13 01:31:59.903983 kernel: Trampoline variant of Tasks RCU enabled. Dec 13 01:31:59.903989 kernel: Tracing variant of Tasks RCU enabled. Dec 13 01:31:59.903996 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Dec 13 01:31:59.904003 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Dec 13 01:31:59.904011 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Dec 13 01:31:59.904018 kernel: GICv3: 256 SPIs implemented Dec 13 01:31:59.904025 kernel: GICv3: 0 Extended SPIs implemented Dec 13 01:31:59.904032 kernel: Root IRQ handler: gic_handle_irq Dec 13 01:31:59.904038 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Dec 13 01:31:59.904045 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 Dec 13 01:31:59.904052 kernel: ITS [mem 0x08080000-0x0809ffff] Dec 13 01:31:59.904058 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1) Dec 13 01:31:59.904065 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1) Dec 13 01:31:59.904072 kernel: GICv3: using LPI property table @0x00000000400f0000 Dec 13 01:31:59.904079 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000 Dec 13 01:31:59.904087 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Dec 13 01:31:59.904094 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 01:31:59.904101 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Dec 13 01:31:59.904108 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Dec 13 01:31:59.904115 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Dec 13 01:31:59.904122 kernel: arm-pv: using stolen time PV Dec 13 01:31:59.904129 kernel: Console: colour dummy device 80x25 Dec 13 01:31:59.904136 kernel: ACPI: Core revision 20230628 Dec 13 01:31:59.904144 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Dec 13 01:31:59.904151 kernel: pid_max: default: 32768 minimum: 301 Dec 13 01:31:59.904159 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Dec 13 01:31:59.904166 kernel: landlock: Up and running. Dec 13 01:31:59.904173 kernel: SELinux: Initializing. Dec 13 01:31:59.904180 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Dec 13 01:31:59.904187 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Dec 13 01:31:59.904194 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Dec 13 01:31:59.904202 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Dec 13 01:31:59.904209 kernel: rcu: Hierarchical SRCU implementation. Dec 13 01:31:59.904216 kernel: rcu: Max phase no-delay instances is 400. Dec 13 01:31:59.904228 kernel: Platform MSI: ITS@0x8080000 domain created Dec 13 01:31:59.904235 kernel: PCI/MSI: ITS@0x8080000 domain created Dec 13 01:31:59.904242 kernel: Remapping and enabling EFI services. Dec 13 01:31:59.904249 kernel: smp: Bringing up secondary CPUs ... 
Dec 13 01:31:59.904256 kernel: Detected PIPT I-cache on CPU1 Dec 13 01:31:59.904263 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 Dec 13 01:31:59.904270 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000 Dec 13 01:31:59.904277 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 01:31:59.904284 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Dec 13 01:31:59.904291 kernel: Detected PIPT I-cache on CPU2 Dec 13 01:31:59.904300 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000 Dec 13 01:31:59.904309 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000 Dec 13 01:31:59.904321 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 01:31:59.904331 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1] Dec 13 01:31:59.904342 kernel: Detected PIPT I-cache on CPU3 Dec 13 01:31:59.904350 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000 Dec 13 01:31:59.904358 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000 Dec 13 01:31:59.904365 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 01:31:59.904373 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1] Dec 13 01:31:59.904381 kernel: smp: Brought up 1 node, 4 CPUs Dec 13 01:31:59.904389 kernel: SMP: Total of 4 processors activated. Dec 13 01:31:59.904398 kernel: CPU features: detected: 32-bit EL0 Support Dec 13 01:31:59.904405 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Dec 13 01:31:59.904413 kernel: CPU features: detected: Common not Private translations Dec 13 01:31:59.904421 kernel: CPU features: detected: CRC32 instructions Dec 13 01:31:59.904428 kernel: CPU features: detected: Enhanced Virtualization Traps Dec 13 01:31:59.904439 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Dec 13 01:31:59.904453 kernel: CPU features: detected: LSE atomic instructions Dec 13 01:31:59.904461 kernel: CPU features: detected: Privileged Access Never Dec 13 01:31:59.904468 kernel: CPU features: detected: RAS Extension Support Dec 13 01:31:59.904476 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Dec 13 01:31:59.904483 kernel: CPU: All CPU(s) started at EL1 Dec 13 01:31:59.904490 kernel: alternatives: applying system-wide alternatives Dec 13 01:31:59.904497 kernel: devtmpfs: initialized Dec 13 01:31:59.904505 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Dec 13 01:31:59.904512 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Dec 13 01:31:59.904521 kernel: pinctrl core: initialized pinctrl subsystem Dec 13 01:31:59.904528 kernel: SMBIOS 3.0.0 present. 
Dec 13 01:31:59.904535 kernel: DMI: QEMU KVM Virtual Machine, BIOS edk2-20230524-3.fc38 05/24/2023 Dec 13 01:31:59.904543 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Dec 13 01:31:59.904550 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Dec 13 01:31:59.904557 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Dec 13 01:31:59.904565 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Dec 13 01:31:59.904572 kernel: audit: initializing netlink subsys (disabled) Dec 13 01:31:59.904579 kernel: audit: type=2000 audit(0.025:1): state=initialized audit_enabled=0 res=1 Dec 13 01:31:59.904588 kernel: thermal_sys: Registered thermal governor 'step_wise' Dec 13 01:31:59.904595 kernel: cpuidle: using governor menu Dec 13 01:31:59.904602 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Dec 13 01:31:59.904614 kernel: ASID allocator initialised with 32768 entries Dec 13 01:31:59.904621 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Dec 13 01:31:59.904629 kernel: Serial: AMBA PL011 UART driver Dec 13 01:31:59.904636 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Dec 13 01:31:59.904643 kernel: Modules: 0 pages in range for non-PLT usage Dec 13 01:31:59.904650 kernel: Modules: 509040 pages in range for PLT usage Dec 13 01:31:59.904659 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Dec 13 01:31:59.904666 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Dec 13 01:31:59.904674 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Dec 13 01:31:59.904681 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Dec 13 01:31:59.904689 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Dec 13 01:31:59.904696 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Dec 13 01:31:59.904703 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Dec 13 01:31:59.904710 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Dec 13 01:31:59.904717 kernel: ACPI: Added _OSI(Module Device) Dec 13 01:31:59.904726 kernel: ACPI: Added _OSI(Processor Device) Dec 13 01:31:59.904733 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Dec 13 01:31:59.904745 kernel: ACPI: Added _OSI(Processor Aggregator Device) Dec 13 01:31:59.904752 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Dec 13 01:31:59.904760 kernel: ACPI: Interpreter enabled Dec 13 01:31:59.904767 kernel: ACPI: Using GIC for interrupt routing Dec 13 01:31:59.904774 kernel: ACPI: MCFG table detected, 1 entries Dec 13 01:31:59.904782 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA Dec 13 01:31:59.904789 kernel: printk: console [ttyAMA0] enabled Dec 13 01:31:59.904798 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Dec 13 01:31:59.904930 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Dec 13 01:31:59.905007 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Dec 13 01:31:59.905076 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Dec 13 01:31:59.905144 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 Dec 13 01:31:59.905209 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] Dec 13 01:31:59.905219 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Dec 13 01:31:59.905229 kernel: PCI host bridge to bus 0000:00 Dec 13 01:31:59.905302 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] Dec 13 01:31:59.905364 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Dec 13 01:31:59.905425 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] Dec 13 01:31:59.905486 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Dec 13 01:31:59.905567 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 Dec 13 01:31:59.905644 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 Dec 13 01:31:59.905716 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f] Dec 13 01:31:59.905800 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff] Dec 13 01:31:59.905933 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref] Dec 13 01:31:59.906000 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref] Dec 13 01:31:59.906079 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff] Dec 13 01:31:59.906148 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f] Dec 13 01:31:59.906208 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Dec 13 01:31:59.906271 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Dec 13 01:31:59.906329 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Dec 13 01:31:59.906339 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Dec 13 01:31:59.906346 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Dec 13 01:31:59.906354 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Dec 13 01:31:59.906361 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Dec 13 01:31:59.906368 kernel: iommu: Default domain type: Translated Dec 13 01:31:59.906376 kernel: iommu: DMA domain TLB invalidation policy: strict mode Dec 13 01:31:59.906385 kernel: efivars: Registered efivars operations Dec 13 01:31:59.906392 kernel: vgaarb: loaded Dec 13 01:31:59.906400 kernel: clocksource: Switched to clocksource arch_sys_counter Dec 13 01:31:59.906407 kernel: VFS: Disk quotas dquot_6.6.0 Dec 13 01:31:59.906414 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Dec 13 01:31:59.906422 kernel: pnp: PnP ACPI init Dec 13 01:31:59.906495 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Dec 13 01:31:59.906506 kernel: pnp: PnP ACPI: found 1 devices Dec 13 01:31:59.906515 kernel: NET: Registered PF_INET protocol family Dec 13 01:31:59.906522 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Dec 13 01:31:59.906530 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Dec 13 01:31:59.906537 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Dec 13 01:31:59.906545 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Dec 13 01:31:59.906552 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Dec 13 01:31:59.906560 kernel: TCP: Hash tables configured (established 32768 bind 32768) Dec 13 01:31:59.906567 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Dec 13 01:31:59.906574 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Dec 13 01:31:59.906583 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Dec 13 01:31:59.906590 kernel: PCI: CLS 0 bytes, default 64 Dec 13 01:31:59.906598 kernel: kvm [1]: HYP mode not available
Dec 13 01:31:59.906605 kernel: Initialise system trusted keyrings Dec 13 01:31:59.906612 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Dec 13 01:31:59.906620 kernel: Key type asymmetric registered Dec 13 01:31:59.906627 kernel: Asymmetric key parser 'x509' registered Dec 13 01:31:59.906634 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Dec 13 01:31:59.906641 kernel: io scheduler mq-deadline registered Dec 13 01:31:59.906650 kernel: io scheduler kyber registered Dec 13 01:31:59.906657 kernel: io scheduler bfq registered Dec 13 01:31:59.906665 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Dec 13 01:31:59.906672 kernel: ACPI: button: Power Button [PWRB] Dec 13 01:31:59.906680 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Dec 13 01:31:59.906756 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007) Dec 13 01:31:59.906766 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Dec 13 01:31:59.906774 kernel: thunder_xcv, ver 1.0 Dec 13 01:31:59.906781 kernel: thunder_bgx, ver 1.0 Dec 13 01:31:59.906790 kernel: nicpf, ver 1.0 Dec 13 01:31:59.906797 kernel: nicvf, ver 1.0 Dec 13 01:31:59.906881 kernel: rtc-efi rtc-efi.0: registered as rtc0 Dec 13 01:31:59.906946 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-12-13T01:31:59 UTC (1734053519) Dec 13 01:31:59.906956 kernel: hid: raw HID events driver (C) Jiri Kosina Dec 13 01:31:59.906964 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available Dec 13 01:31:59.906971 kernel: watchdog: Delayed init of the lockup detector failed: -19 Dec 13 01:31:59.906979 kernel: watchdog: Hard watchdog permanently disabled Dec 13 01:31:59.906988 kernel: NET: Registered PF_INET6 protocol family Dec 13 01:31:59.906996 kernel: Segment Routing with IPv6 Dec 13 01:31:59.907003 kernel: In-situ OAM (IOAM) with IPv6 Dec 13 01:31:59.907010 kernel: NET: Registered PF_PACKET protocol family Dec 13 01:31:59.907018 kernel: Key type dns_resolver registered Dec 13 01:31:59.907025 kernel: registered taskstats version 1 Dec 13 01:31:59.907032 kernel: Loading compiled-in X.509 certificates Dec 13 01:31:59.907040 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.65-flatcar: d83da9ddb9e3c2439731828371f21d0232fd9ffb' Dec 13 01:31:59.907047 kernel: Key type .fscrypt registered Dec 13 01:31:59.907055 kernel: Key type fscrypt-provisioning registered Dec 13 01:31:59.907063 kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 13 01:31:59.907071 kernel: ima: Allocated hash algorithm: sha1 Dec 13 01:31:59.907079 kernel: ima: No architecture policies found Dec 13 01:31:59.907086 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Dec 13 01:31:59.907093 kernel: clk: Disabling unused clocks Dec 13 01:31:59.907100 kernel: Freeing unused kernel memory: 39360K Dec 13 01:31:59.907111 kernel: Run /init as init process Dec 13 01:31:59.907121 kernel: with arguments: Dec 13 01:31:59.907131 kernel: /init Dec 13 01:31:59.907138 kernel: with environment: Dec 13 01:31:59.907145 kernel: HOME=/ Dec 13 01:31:59.907153 kernel: TERM=linux Dec 13 01:31:59.907161 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Dec 13 01:31:59.907170 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Dec 13 01:31:59.907179 systemd[1]: Detected virtualization kvm. Dec 13 01:31:59.907187 systemd[1]: Detected architecture arm64. Dec 13 01:31:59.907196 systemd[1]: Running in initrd. Dec 13 01:31:59.907203 systemd[1]: No hostname configured, using default hostname. Dec 13 01:31:59.907211 systemd[1]: Hostname set to . Dec 13 01:31:59.907220 systemd[1]: Initializing machine ID from VM UUID. Dec 13 01:31:59.907228 systemd[1]: Queued start job for default target initrd.target. Dec 13 01:31:59.907236 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 01:31:59.907244 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 01:31:59.907253 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Dec 13 01:31:59.907263 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 13 01:31:59.907271 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Dec 13 01:31:59.907279 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Dec 13 01:31:59.907288 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Dec 13 01:31:59.907296 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Dec 13 01:31:59.907304 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 01:31:59.907313 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 13 01:31:59.907321 systemd[1]: Reached target paths.target - Path Units. Dec 13 01:31:59.907330 systemd[1]: Reached target slices.target - Slice Units. Dec 13 01:31:59.907338 systemd[1]: Reached target swap.target - Swaps. Dec 13 01:31:59.907346 systemd[1]: Reached target timers.target - Timer Units. Dec 13 01:31:59.907353 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Dec 13 01:31:59.907361 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 13 01:31:59.907369 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Dec 13 01:31:59.907377 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Dec 13 01:31:59.907391 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
Dec 13 01:31:59.907399 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 13 01:31:59.907407 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 01:31:59.907415 systemd[1]: Reached target sockets.target - Socket Units. Dec 13 01:31:59.907423 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Dec 13 01:31:59.907431 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 13 01:31:59.907439 systemd[1]: Finished network-cleanup.service - Network Cleanup. Dec 13 01:31:59.907447 systemd[1]: Starting systemd-fsck-usr.service... Dec 13 01:31:59.907454 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 13 01:31:59.907464 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 13 01:31:59.907472 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 01:31:59.907479 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Dec 13 01:31:59.907487 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 01:31:59.907495 systemd[1]: Finished systemd-fsck-usr.service. Dec 13 01:31:59.907522 systemd-journald[238]: Collecting audit messages is disabled. Dec 13 01:31:59.907541 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 13 01:31:59.907549 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:31:59.907558 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Dec 13 01:31:59.907566 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 01:31:59.907575 systemd-journald[238]: Journal started Dec 13 01:31:59.907593 systemd-journald[238]: Runtime Journal (/run/log/journal/e60c395ba85c4a498d6e817bc727255b) is 5.9M, max 47.3M, 41.4M free. Dec 13 01:31:59.894840 systemd-modules-load[239]: Inserted module 'overlay' Dec 13 01:31:59.909629 systemd[1]: Started systemd-journald.service - Journal Service. Dec 13 01:31:59.909648 kernel: Bridge firewalling registered Dec 13 01:31:59.910854 systemd-modules-load[239]: Inserted module 'br_netfilter' Dec 13 01:31:59.911206 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 13 01:31:59.912850 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 13 01:31:59.917066 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 13 01:31:59.918942 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 13 01:31:59.921327 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 13 01:31:59.925248 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 01:31:59.928004 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Dec 13 01:31:59.930837 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 13 01:31:59.932625 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 01:31:59.937692 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 01:31:59.940539 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Dec 13 01:31:59.942243 dracut-cmdline[270]: dracut-dracut-053 Dec 13 01:31:59.944194 dracut-cmdline[270]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=9494f75a68cfbdce95d0d2f9b58d6d75bc38ee5b4e31dfc2a6da695ffafefba6 Dec 13 01:31:59.971139 systemd-resolved[283]: Positive Trust Anchors: Dec 13 01:31:59.971155 systemd-resolved[283]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 01:31:59.971186 systemd-resolved[283]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 13 01:31:59.975884 systemd-resolved[283]: Defaulting to hostname 'linux'. Dec 13 01:31:59.978522 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 13 01:31:59.979555 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 13 01:32:00.012838 kernel: SCSI subsystem initialized Dec 13 01:32:00.017828 kernel: Loading iSCSI transport class v2.0-870. Dec 13 01:32:00.024867 kernel: iscsi: registered transport (tcp) Dec 13 01:32:00.037839 kernel: iscsi: registered transport (qla4xxx) Dec 13 01:32:00.037858 kernel: QLogic iSCSI HBA Driver Dec 13 01:32:00.079682 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Dec 13 01:32:00.090014 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Dec 13 01:32:00.106209 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Dec 13 01:32:00.106242 kernel: device-mapper: uevent: version 1.0.3 Dec 13 01:32:00.107753 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Dec 13 01:32:00.156851 kernel: raid6: neonx8 gen() 15741 MB/s Dec 13 01:32:00.173844 kernel: raid6: neonx4 gen() 15596 MB/s Dec 13 01:32:00.190837 kernel: raid6: neonx2 gen() 13177 MB/s Dec 13 01:32:00.207844 kernel: raid6: neonx1 gen() 10442 MB/s Dec 13 01:32:00.224843 kernel: raid6: int64x8 gen() 6941 MB/s Dec 13 01:32:00.241845 kernel: raid6: int64x4 gen() 7319 MB/s Dec 13 01:32:00.258844 kernel: raid6: int64x2 gen() 6112 MB/s Dec 13 01:32:00.275919 kernel: raid6: int64x1 gen() 5039 MB/s Dec 13 01:32:00.275946 kernel: raid6: using algorithm neonx8 gen() 15741 MB/s Dec 13 01:32:00.293898 kernel: raid6: .... xor() 11906 MB/s, rmw enabled Dec 13 01:32:00.293930 kernel: raid6: using neon recovery algorithm Dec 13 01:32:00.299204 kernel: xor: measuring software checksum speed Dec 13 01:32:00.299219 kernel: 8regs : 19831 MB/sec Dec 13 01:32:00.299884 kernel: 32regs : 19650 MB/sec Dec 13 01:32:00.301115 kernel: arm64_neon : 26839 MB/sec Dec 13 01:32:00.301130 kernel: xor: using function: arm64_neon (26839 MB/sec) Dec 13 01:32:00.352840 kernel: Btrfs loaded, zoned=no, fsverity=no Dec 13 01:32:00.363325 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. 
Dec 13 01:32:00.372997 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 01:32:00.384373 systemd-udevd[459]: Using default interface naming scheme 'v255'. Dec 13 01:32:00.387464 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 01:32:00.389968 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Dec 13 01:32:00.404854 dracut-pre-trigger[467]: rd.md=0: removing MD RAID activation Dec 13 01:32:00.431553 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Dec 13 01:32:00.441986 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 13 01:32:00.482146 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 01:32:00.487966 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Dec 13 01:32:00.499672 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Dec 13 01:32:00.501850 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Dec 13 01:32:00.502910 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 01:32:00.504799 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 13 01:32:00.511998 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Dec 13 01:32:00.522309 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Dec 13 01:32:00.526530 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues Dec 13 01:32:00.531336 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Dec 13 01:32:00.531443 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Dec 13 01:32:00.531455 kernel: GPT:9289727 != 19775487 Dec 13 01:32:00.531464 kernel: GPT:Alternate GPT header not at the end of the disk. Dec 13 01:32:00.531478 kernel: GPT:9289727 != 19775487 Dec 13 01:32:00.531487 kernel: GPT: Use GNU Parted to correct GPT errors. Dec 13 01:32:00.531499 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 01:32:00.531248 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 01:32:00.531357 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 01:32:00.533332 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 01:32:00.534363 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 01:32:00.534495 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:32:00.537262 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 01:32:00.546063 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 01:32:00.551840 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (507) Dec 13 01:32:00.554874 kernel: BTRFS: device fsid 2893cd1e-612b-4262-912c-10787dc9c881 devid 1 transid 46 /dev/vda3 scanned by (udev-worker) (510) Dec 13 01:32:00.559389 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:32:00.565320 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Dec 13 01:32:00.569774 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Dec 13 01:32:00.576785 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. 
Dec 13 01:32:00.580506 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Dec 13 01:32:00.581590 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Dec 13 01:32:00.591000 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Dec 13 01:32:00.594980 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 01:32:00.597377 disk-uuid[547]: Primary Header is updated. Dec 13 01:32:00.597377 disk-uuid[547]: Secondary Entries is updated. Dec 13 01:32:00.597377 disk-uuid[547]: Secondary Header is updated. Dec 13 01:32:00.600832 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 01:32:00.614130 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 01:32:01.615865 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 01:32:01.616087 disk-uuid[548]: The operation has completed successfully. Dec 13 01:32:01.637220 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 13 01:32:01.637323 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Dec 13 01:32:01.659977 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Dec 13 01:32:01.662855 sh[571]: Success Dec 13 01:32:01.672840 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Dec 13 01:32:01.700876 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Dec 13 01:32:01.717214 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Dec 13 01:32:01.719455 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Dec 13 01:32:01.728902 kernel: BTRFS info (device dm-0): first mount of filesystem 2893cd1e-612b-4262-912c-10787dc9c881 Dec 13 01:32:01.728935 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Dec 13 01:32:01.730034 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Dec 13 01:32:01.730051 kernel: BTRFS info (device dm-0): disabling log replay at mount time Dec 13 01:32:01.731418 kernel: BTRFS info (device dm-0): using free space tree Dec 13 01:32:01.734888 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Dec 13 01:32:01.736041 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Dec 13 01:32:01.743017 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Dec 13 01:32:01.744393 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Dec 13 01:32:01.753003 kernel: BTRFS info (device vda6): first mount of filesystem dbef6a22-a801-4c1e-a0cd-3fc525f899dd Dec 13 01:32:01.753050 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Dec 13 01:32:01.753832 kernel: BTRFS info (device vda6): using free space tree Dec 13 01:32:01.756062 kernel: BTRFS info (device vda6): auto enabling async discard Dec 13 01:32:01.762637 systemd[1]: mnt-oem.mount: Deactivated successfully. Dec 13 01:32:01.764328 kernel: BTRFS info (device vda6): last unmount of filesystem dbef6a22-a801-4c1e-a0cd-3fc525f899dd Dec 13 01:32:01.769077 systemd[1]: Finished ignition-setup.service - Ignition (setup). Dec 13 01:32:01.774967 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Dec 13 01:32:01.839939 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 13 01:32:01.860030 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 13 01:32:01.871767 ignition[660]: Ignition 2.19.0 Dec 13 01:32:01.871777 ignition[660]: Stage: fetch-offline Dec 13 01:32:01.871837 ignition[660]: no configs at "/usr/lib/ignition/base.d" Dec 13 01:32:01.871860 ignition[660]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 01:32:01.872021 ignition[660]: parsed url from cmdline: "" Dec 13 01:32:01.872024 ignition[660]: no config URL provided Dec 13 01:32:01.872029 ignition[660]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 01:32:01.872036 ignition[660]: no config at "/usr/lib/ignition/user.ign" Dec 13 01:32:01.872061 ignition[660]: op(1): [started] loading QEMU firmware config module Dec 13 01:32:01.872065 ignition[660]: op(1): executing: "modprobe" "qemu_fw_cfg" Dec 13 01:32:01.885426 ignition[660]: op(1): [finished] loading QEMU firmware config module Dec 13 01:32:01.888321 systemd-networkd[766]: lo: Link UP Dec 13 01:32:01.888333 systemd-networkd[766]: lo: Gained carrier Dec 13 01:32:01.889068 systemd-networkd[766]: Enumeration completed Dec 13 01:32:01.889156 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 13 01:32:01.890333 systemd[1]: Reached target network.target - Network. Dec 13 01:32:01.891290 systemd-networkd[766]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 01:32:01.891293 systemd-networkd[766]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 01:32:01.892114 systemd-networkd[766]: eth0: Link UP Dec 13 01:32:01.892117 systemd-networkd[766]: eth0: Gained carrier Dec 13 01:32:01.892124 systemd-networkd[766]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 01:32:01.915867 systemd-networkd[766]: eth0: DHCPv4 address 10.0.0.74/16, gateway 10.0.0.1 acquired from 10.0.0.1 Dec 13 01:32:01.935436 ignition[660]: parsing config with SHA512: 2dedb1aec799df5a8ff940e34a870c2cc9aa99c18dc4c81b9e6ce993c64ebc139f72b1a8cc62f28324c0b6afd2995f817b62199b2ff713ad63a177685fb3fec0 Dec 13 01:32:01.939722 unknown[660]: fetched base config from "system" Dec 13 01:32:01.939731 unknown[660]: fetched user config from "qemu" Dec 13 01:32:01.941347 ignition[660]: fetch-offline: fetch-offline passed Dec 13 01:32:01.942873 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Dec 13 01:32:01.941437 ignition[660]: Ignition finished successfully Dec 13 01:32:01.944055 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Dec 13 01:32:01.954971 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Dec 13 01:32:01.965678 ignition[775]: Ignition 2.19.0 Dec 13 01:32:01.965688 ignition[775]: Stage: kargs Dec 13 01:32:01.965879 ignition[775]: no configs at "/usr/lib/ignition/base.d" Dec 13 01:32:01.965889 ignition[775]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 01:32:01.966711 ignition[775]: kargs: kargs passed Dec 13 01:32:01.966766 ignition[775]: Ignition finished successfully Dec 13 01:32:01.968933 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). 
Dec 13 01:32:01.971353 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Dec 13 01:32:01.984987 ignition[784]: Ignition 2.19.0 Dec 13 01:32:01.984998 ignition[784]: Stage: disks Dec 13 01:32:01.985180 ignition[784]: no configs at "/usr/lib/ignition/base.d" Dec 13 01:32:01.985190 ignition[784]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 01:32:01.986040 ignition[784]: disks: disks passed Dec 13 01:32:01.986086 ignition[784]: Ignition finished successfully Dec 13 01:32:01.988850 systemd[1]: Finished ignition-disks.service - Ignition (disks). Dec 13 01:32:01.989946 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Dec 13 01:32:01.991473 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Dec 13 01:32:01.993235 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 13 01:32:01.994848 systemd[1]: Reached target sysinit.target - System Initialization. Dec 13 01:32:01.996423 systemd[1]: Reached target basic.target - Basic System. Dec 13 01:32:02.004016 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Dec 13 01:32:02.014694 systemd-fsck[794]: ROOT: clean, 14/553520 files, 52654/553472 blocks Dec 13 01:32:02.019129 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Dec 13 01:32:02.027916 systemd[1]: Mounting sysroot.mount - /sysroot... Dec 13 01:32:02.068829 kernel: EXT4-fs (vda9): mounted filesystem 32632247-db8d-4541-89c0-6f68c7fa7ee3 r/w with ordered data mode. Quota mode: none. Dec 13 01:32:02.069219 systemd[1]: Mounted sysroot.mount - /sysroot. Dec 13 01:32:02.070360 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Dec 13 01:32:02.081903 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 13 01:32:02.083478 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Dec 13 01:32:02.084567 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Dec 13 01:32:02.084647 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 13 01:32:02.091345 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (802) Dec 13 01:32:02.091368 kernel: BTRFS info (device vda6): first mount of filesystem dbef6a22-a801-4c1e-a0cd-3fc525f899dd Dec 13 01:32:02.084706 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Dec 13 01:32:02.095617 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Dec 13 01:32:02.095635 kernel: BTRFS info (device vda6): using free space tree Dec 13 01:32:02.090930 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Dec 13 01:32:02.098127 kernel: BTRFS info (device vda6): auto enabling async discard Dec 13 01:32:02.094903 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Dec 13 01:32:02.099557 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Dec 13 01:32:02.139755 initrd-setup-root[826]: cut: /sysroot/etc/passwd: No such file or directory Dec 13 01:32:02.144294 initrd-setup-root[833]: cut: /sysroot/etc/group: No such file or directory Dec 13 01:32:02.148165 initrd-setup-root[840]: cut: /sysroot/etc/shadow: No such file or directory Dec 13 01:32:02.152055 initrd-setup-root[847]: cut: /sysroot/etc/gshadow: No such file or directory Dec 13 01:32:02.227325 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Dec 13 01:32:02.240914 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Dec 13 01:32:02.242328 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Dec 13 01:32:02.247828 kernel: BTRFS info (device vda6): last unmount of filesystem dbef6a22-a801-4c1e-a0cd-3fc525f899dd Dec 13 01:32:02.265080 ignition[914]: INFO : Ignition 2.19.0 Dec 13 01:32:02.265080 ignition[914]: INFO : Stage: mount Dec 13 01:32:02.265080 ignition[914]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 01:32:02.265080 ignition[914]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 01:32:02.265080 ignition[914]: INFO : mount: mount passed Dec 13 01:32:02.265080 ignition[914]: INFO : Ignition finished successfully Dec 13 01:32:02.264263 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Dec 13 01:32:02.266763 systemd[1]: Finished ignition-mount.service - Ignition (mount). Dec 13 01:32:02.279955 systemd[1]: Starting ignition-files.service - Ignition (files)... Dec 13 01:32:02.727782 systemd[1]: sysroot-oem.mount: Deactivated successfully. Dec 13 01:32:02.737009 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 13 01:32:02.743532 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (929) Dec 13 01:32:02.743561 kernel: BTRFS info (device vda6): first mount of filesystem dbef6a22-a801-4c1e-a0cd-3fc525f899dd Dec 13 01:32:02.743572 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Dec 13 01:32:02.745132 kernel: BTRFS info (device vda6): using free space tree Dec 13 01:32:02.747841 kernel: BTRFS info (device vda6): auto enabling async discard Dec 13 01:32:02.748525 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Dec 13 01:32:02.765355 ignition[946]: INFO : Ignition 2.19.0 Dec 13 01:32:02.765355 ignition[946]: INFO : Stage: files Dec 13 01:32:02.766772 ignition[946]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 01:32:02.766772 ignition[946]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 01:32:02.766772 ignition[946]: DEBUG : files: compiled without relabeling support, skipping Dec 13 01:32:02.769772 ignition[946]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 13 01:32:02.769772 ignition[946]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 13 01:32:02.769772 ignition[946]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 13 01:32:02.769772 ignition[946]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 13 01:32:02.774368 ignition[946]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 13 01:32:02.774368 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Dec 13 01:32:02.774368 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Dec 13 01:32:02.769866 unknown[946]: wrote ssh authorized keys file for user: core Dec 13 01:32:03.005034 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Dec 13 01:32:03.066784 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Dec 13 01:32:03.068517 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Dec 13 01:32:03.068517 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Dec 13 01:32:03.068517 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Dec 13 01:32:03.068517 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Dec 13 01:32:03.068517 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 01:32:03.068517 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 01:32:03.068517 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 01:32:03.068517 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 01:32:03.068517 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 01:32:03.068517 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 01:32:03.068517 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Dec 13 01:32:03.068517 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Dec 13 01:32:03.068517 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Dec 13 01:32:03.068517 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-arm64.raw: attempt #1 Dec 13 01:32:03.311360 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Dec 13 01:32:03.687365 systemd-networkd[766]: eth0: Gained IPv6LL Dec 13 01:32:03.717597 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Dec 13 01:32:03.717597 ignition[946]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Dec 13 01:32:03.720853 ignition[946]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 01:32:03.720853 ignition[946]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 01:32:03.720853 ignition[946]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Dec 13 01:32:03.720853 ignition[946]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Dec 13 01:32:03.720853 ignition[946]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Dec 13 01:32:03.720853 ignition[946]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Dec 13 01:32:03.720853 ignition[946]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Dec 13 01:32:03.720853 ignition[946]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Dec 13 01:32:03.742572 ignition[946]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Dec 13 01:32:03.746582 ignition[946]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Dec 13 01:32:03.747844 ignition[946]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Dec 13 01:32:03.747844 ignition[946]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Dec 13 01:32:03.747844 ignition[946]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Dec 13 01:32:03.747844 ignition[946]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 13 01:32:03.747844 ignition[946]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 13 01:32:03.747844 ignition[946]: INFO : files: files passed Dec 13 01:32:03.747844 ignition[946]: INFO : Ignition finished successfully Dec 13 01:32:03.749003 systemd[1]: Finished ignition-files.service - Ignition (files). Dec 13 01:32:03.757071 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Dec 13 01:32:03.760016 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Dec 13 01:32:03.762558 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 13 01:32:03.762681 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Dec 13 01:32:03.768191 initrd-setup-root-after-ignition[975]: grep: /sysroot/oem/oem-release: No such file or directory Dec 13 01:32:03.771971 initrd-setup-root-after-ignition[977]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 01:32:03.771971 initrd-setup-root-after-ignition[977]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Dec 13 01:32:03.774638 initrd-setup-root-after-ignition[981]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 01:32:03.774480 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 13 01:32:03.775840 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Dec 13 01:32:03.784970 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Dec 13 01:32:03.805874 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 13 01:32:03.805984 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Dec 13 01:32:03.807995 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Dec 13 01:32:03.808806 systemd[1]: Reached target initrd.target - Initrd Default Target. Dec 13 01:32:03.810540 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Dec 13 01:32:03.811396 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Dec 13 01:32:03.827567 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 13 01:32:03.829966 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Dec 13 01:32:03.841881 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Dec 13 01:32:03.843861 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 01:32:03.844903 systemd[1]: Stopped target timers.target - Timer Units. Dec 13 01:32:03.846381 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 13 01:32:03.846508 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 13 01:32:03.848790 systemd[1]: Stopped target initrd.target - Initrd Default Target. Dec 13 01:32:03.850490 systemd[1]: Stopped target basic.target - Basic System. Dec 13 01:32:03.851857 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Dec 13 01:32:03.853403 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Dec 13 01:32:03.855065 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Dec 13 01:32:03.856764 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Dec 13 01:32:03.858343 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Dec 13 01:32:03.860034 systemd[1]: Stopped target sysinit.target - System Initialization. Dec 13 01:32:03.861657 systemd[1]: Stopped target local-fs.target - Local File Systems. Dec 13 01:32:03.863219 systemd[1]: Stopped target swap.target - Swaps. Dec 13 01:32:03.864481 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 13 01:32:03.864608 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Dec 13 01:32:03.866540 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. 
Dec 13 01:32:03.868288 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 01:32:03.870138 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Dec 13 01:32:03.870243 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 01:32:03.871870 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 13 01:32:03.872001 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Dec 13 01:32:03.874534 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 13 01:32:03.874657 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Dec 13 01:32:03.876358 systemd[1]: Stopped target paths.target - Path Units. Dec 13 01:32:03.877601 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 13 01:32:03.880873 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 01:32:03.883216 systemd[1]: Stopped target slices.target - Slice Units. Dec 13 01:32:03.884090 systemd[1]: Stopped target sockets.target - Socket Units. Dec 13 01:32:03.885410 systemd[1]: iscsid.socket: Deactivated successfully. Dec 13 01:32:03.885505 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Dec 13 01:32:03.886763 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 13 01:32:03.886865 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 13 01:32:03.888125 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 13 01:32:03.888236 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 13 01:32:03.889830 systemd[1]: ignition-files.service: Deactivated successfully. Dec 13 01:32:03.889935 systemd[1]: Stopped ignition-files.service - Ignition (files). Dec 13 01:32:03.903019 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Dec 13 01:32:03.904549 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Dec 13 01:32:03.905293 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 13 01:32:03.905421 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 01:32:03.907178 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 13 01:32:03.907275 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Dec 13 01:32:03.913379 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 13 01:32:03.914869 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Dec 13 01:32:03.917317 ignition[1001]: INFO : Ignition 2.19.0 Dec 13 01:32:03.917317 ignition[1001]: INFO : Stage: umount Dec 13 01:32:03.917317 ignition[1001]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 01:32:03.917317 ignition[1001]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 01:32:03.917317 ignition[1001]: INFO : umount: umount passed Dec 13 01:32:03.917317 ignition[1001]: INFO : Ignition finished successfully Dec 13 01:32:03.918400 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 13 01:32:03.918509 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Dec 13 01:32:03.920648 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 13 01:32:03.921074 systemd[1]: Stopped target network.target - Network. Dec 13 01:32:03.922336 systemd[1]: ignition-disks.service: Deactivated successfully. 
Dec 13 01:32:03.922397 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Dec 13 01:32:03.923762 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 13 01:32:03.923822 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Dec 13 01:32:03.925336 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 13 01:32:03.925378 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Dec 13 01:32:03.927710 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Dec 13 01:32:03.927769 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Dec 13 01:32:03.929550 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Dec 13 01:32:03.930999 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Dec 13 01:32:03.936986 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 13 01:32:03.937094 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Dec 13 01:32:03.937862 systemd-networkd[766]: eth0: DHCPv6 lease lost Dec 13 01:32:03.939302 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Dec 13 01:32:03.939375 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 01:32:03.943329 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 13 01:32:03.943433 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Dec 13 01:32:03.944834 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 13 01:32:03.944871 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Dec 13 01:32:03.952949 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Dec 13 01:32:03.954031 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 13 01:32:03.954093 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 13 01:32:03.956052 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 01:32:03.956101 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 13 01:32:03.956975 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 13 01:32:03.957015 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Dec 13 01:32:03.958829 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 01:32:03.968968 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 13 01:32:03.969094 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Dec 13 01:32:03.974201 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 13 01:32:03.974304 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Dec 13 01:32:03.976951 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 13 01:32:03.977050 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Dec 13 01:32:03.980469 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 13 01:32:03.980606 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 01:32:03.982048 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 13 01:32:03.982089 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Dec 13 01:32:03.983251 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 13 01:32:03.983279 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. 
Dec 13 01:32:03.985067 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 13 01:32:03.985118 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Dec 13 01:32:03.987351 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 13 01:32:03.987399 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Dec 13 01:32:03.989551 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 01:32:03.989593 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 01:32:03.997972 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Dec 13 01:32:03.998849 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Dec 13 01:32:03.998910 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 01:32:04.000889 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Dec 13 01:32:04.000939 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 13 01:32:04.002572 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 13 01:32:04.002612 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 01:32:04.004428 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 01:32:04.004472 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:32:04.006528 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 13 01:32:04.007841 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Dec 13 01:32:04.009326 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Dec 13 01:32:04.011417 systemd[1]: Starting initrd-switch-root.service - Switch Root... Dec 13 01:32:04.021032 systemd[1]: Switching root. Dec 13 01:32:04.042217 systemd-journald[238]: Journal stopped Dec 13 01:32:04.738570 systemd-journald[238]: Received SIGTERM from PID 1 (systemd). Dec 13 01:32:04.738630 kernel: SELinux: policy capability network_peer_controls=1 Dec 13 01:32:04.738644 kernel: SELinux: policy capability open_perms=1 Dec 13 01:32:04.738654 kernel: SELinux: policy capability extended_socket_class=1 Dec 13 01:32:04.738664 kernel: SELinux: policy capability always_check_network=0 Dec 13 01:32:04.738673 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 13 01:32:04.738683 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 13 01:32:04.738693 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 13 01:32:04.738702 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 13 01:32:04.738716 kernel: audit: type=1403 audit(1734053524.186:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Dec 13 01:32:04.738744 systemd[1]: Successfully loaded SELinux policy in 31.668ms. Dec 13 01:32:04.738765 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.005ms. Dec 13 01:32:04.738777 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Dec 13 01:32:04.738789 systemd[1]: Detected virtualization kvm. Dec 13 01:32:04.738799 systemd[1]: Detected architecture arm64. 
Dec 13 01:32:04.738840 systemd[1]: Detected first boot. Dec 13 01:32:04.738853 systemd[1]: Initializing machine ID from VM UUID. Dec 13 01:32:04.738864 zram_generator::config[1045]: No configuration found. Dec 13 01:32:04.738878 systemd[1]: Populated /etc with preset unit settings. Dec 13 01:32:04.738889 systemd[1]: initrd-switch-root.service: Deactivated successfully. Dec 13 01:32:04.738900 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Dec 13 01:32:04.738910 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Dec 13 01:32:04.738921 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Dec 13 01:32:04.738934 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Dec 13 01:32:04.738945 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Dec 13 01:32:04.738955 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Dec 13 01:32:04.738966 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Dec 13 01:32:04.738978 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Dec 13 01:32:04.738989 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Dec 13 01:32:04.739000 systemd[1]: Created slice user.slice - User and Session Slice. Dec 13 01:32:04.739010 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 01:32:04.739021 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 01:32:04.739032 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Dec 13 01:32:04.739043 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Dec 13 01:32:04.739054 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Dec 13 01:32:04.739066 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 13 01:32:04.739077 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Dec 13 01:32:04.739087 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 01:32:04.739098 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Dec 13 01:32:04.739109 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Dec 13 01:32:04.739119 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Dec 13 01:32:04.739130 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Dec 13 01:32:04.739140 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 01:32:04.739152 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 13 01:32:04.739164 systemd[1]: Reached target slices.target - Slice Units. Dec 13 01:32:04.739175 systemd[1]: Reached target swap.target - Swaps. Dec 13 01:32:04.739185 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Dec 13 01:32:04.739196 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Dec 13 01:32:04.739207 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 13 01:32:04.739217 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. 
Dec 13 01:32:04.739228 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 01:32:04.739239 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Dec 13 01:32:04.739251 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Dec 13 01:32:04.739262 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Dec 13 01:32:04.739273 systemd[1]: Mounting media.mount - External Media Directory... Dec 13 01:32:04.739283 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Dec 13 01:32:04.739294 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Dec 13 01:32:04.739304 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Dec 13 01:32:04.739316 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 13 01:32:04.739326 systemd[1]: Reached target machines.target - Containers. Dec 13 01:32:04.739337 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Dec 13 01:32:04.739349 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 01:32:04.739360 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 13 01:32:04.739371 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Dec 13 01:32:04.739382 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 01:32:04.739394 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 13 01:32:04.739404 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 01:32:04.739415 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Dec 13 01:32:04.739425 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 01:32:04.739438 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 13 01:32:04.739449 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Dec 13 01:32:04.739459 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Dec 13 01:32:04.739470 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Dec 13 01:32:04.739480 kernel: fuse: init (API version 7.39) Dec 13 01:32:04.739490 systemd[1]: Stopped systemd-fsck-usr.service. Dec 13 01:32:04.739500 kernel: ACPI: bus type drm_connector registered Dec 13 01:32:04.739510 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 13 01:32:04.739520 kernel: loop: module loaded Dec 13 01:32:04.739532 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 13 01:32:04.739543 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 13 01:32:04.739554 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Dec 13 01:32:04.739564 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 13 01:32:04.739575 systemd[1]: verity-setup.service: Deactivated successfully. Dec 13 01:32:04.739586 systemd[1]: Stopped verity-setup.service. Dec 13 01:32:04.739615 systemd-journald[1116]: Collecting audit messages is disabled. 
Dec 13 01:32:04.739637 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Dec 13 01:32:04.739652 systemd-journald[1116]: Journal started Dec 13 01:32:04.739673 systemd-journald[1116]: Runtime Journal (/run/log/journal/e60c395ba85c4a498d6e817bc727255b) is 5.9M, max 47.3M, 41.4M free. Dec 13 01:32:04.739709 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Dec 13 01:32:04.547154 systemd[1]: Queued start job for default target multi-user.target. Dec 13 01:32:04.561923 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Dec 13 01:32:04.562280 systemd[1]: systemd-journald.service: Deactivated successfully. Dec 13 01:32:04.743409 systemd[1]: Started systemd-journald.service - Journal Service. Dec 13 01:32:04.744068 systemd[1]: Mounted media.mount - External Media Directory. Dec 13 01:32:04.744994 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Dec 13 01:32:04.746043 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Dec 13 01:32:04.747155 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Dec 13 01:32:04.748890 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Dec 13 01:32:04.750121 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 01:32:04.751444 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 13 01:32:04.751599 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Dec 13 01:32:04.754076 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 01:32:04.754209 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 01:32:04.755535 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 01:32:04.755693 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 13 01:32:04.756930 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 01:32:04.757067 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 01:32:04.758303 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 13 01:32:04.758428 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Dec 13 01:32:04.759609 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 01:32:04.759764 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 01:32:04.761018 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 13 01:32:04.762292 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 13 01:32:04.763587 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Dec 13 01:32:04.775590 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 13 01:32:04.788920 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Dec 13 01:32:04.790836 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Dec 13 01:32:04.791733 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 13 01:32:04.791771 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 13 01:32:04.793540 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Dec 13 01:32:04.795674 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... 
Dec 13 01:32:04.797691 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Dec 13 01:32:04.798695 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 01:32:04.800320 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Dec 13 01:32:04.803057 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Dec 13 01:32:04.804246 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 01:32:04.806006 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Dec 13 01:32:04.807109 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 13 01:32:04.810044 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 13 01:32:04.813964 systemd-journald[1116]: Time spent on flushing to /var/log/journal/e60c395ba85c4a498d6e817bc727255b is 21.160ms for 855 entries. Dec 13 01:32:04.813964 systemd-journald[1116]: System Journal (/var/log/journal/e60c395ba85c4a498d6e817bc727255b) is 8.0M, max 195.6M, 187.6M free. Dec 13 01:32:04.848080 systemd-journald[1116]: Received client request to flush runtime journal. Dec 13 01:32:04.848117 kernel: loop0: detected capacity change from 0 to 114432 Dec 13 01:32:04.814075 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Dec 13 01:32:04.819344 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 13 01:32:04.822000 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 01:32:04.832718 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Dec 13 01:32:04.834188 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Dec 13 01:32:04.836742 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Dec 13 01:32:04.838653 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Dec 13 01:32:04.842270 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 13 01:32:04.844336 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Dec 13 01:32:04.850965 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 13 01:32:04.856084 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Dec 13 01:32:04.862057 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Dec 13 01:32:04.863494 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Dec 13 01:32:04.868621 systemd-tmpfiles[1157]: ACLs are not supported, ignoring. Dec 13 01:32:04.868635 systemd-tmpfiles[1157]: ACLs are not supported, ignoring. Dec 13 01:32:04.872760 kernel: loop1: detected capacity change from 0 to 114328 Dec 13 01:32:04.874211 udevadm[1172]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Dec 13 01:32:04.878189 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 13 01:32:04.890229 systemd[1]: Starting systemd-sysusers.service - Create System Users... 
Dec 13 01:32:04.892017 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 13 01:32:04.892954 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Dec 13 01:32:04.905832 kernel: loop2: detected capacity change from 0 to 194512 Dec 13 01:32:04.922278 systemd[1]: Finished systemd-sysusers.service - Create System Users. Dec 13 01:32:04.929160 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 13 01:32:04.940302 systemd-tmpfiles[1181]: ACLs are not supported, ignoring. Dec 13 01:32:04.940319 systemd-tmpfiles[1181]: ACLs are not supported, ignoring. Dec 13 01:32:04.944218 kernel: loop3: detected capacity change from 0 to 114432 Dec 13 01:32:04.944393 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 01:32:04.950836 kernel: loop4: detected capacity change from 0 to 114328 Dec 13 01:32:04.954822 kernel: loop5: detected capacity change from 0 to 194512 Dec 13 01:32:04.958574 (sd-merge)[1183]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Dec 13 01:32:04.958962 (sd-merge)[1183]: Merged extensions into '/usr'. Dec 13 01:32:04.962472 systemd[1]: Reloading requested from client PID 1156 ('systemd-sysext') (unit systemd-sysext.service)... Dec 13 01:32:04.962512 systemd[1]: Reloading... Dec 13 01:32:05.014219 zram_generator::config[1207]: No configuration found. Dec 13 01:32:05.077683 ldconfig[1151]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 13 01:32:05.109789 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:32:05.145382 systemd[1]: Reloading finished in 182 ms. Dec 13 01:32:05.173851 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Dec 13 01:32:05.175037 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Dec 13 01:32:05.186091 systemd[1]: Starting ensure-sysext.service... Dec 13 01:32:05.187729 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 13 01:32:05.194431 systemd[1]: Reloading requested from client PID 1244 ('systemctl') (unit ensure-sysext.service)... Dec 13 01:32:05.194447 systemd[1]: Reloading... Dec 13 01:32:05.203937 systemd-tmpfiles[1246]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 13 01:32:05.204186 systemd-tmpfiles[1246]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Dec 13 01:32:05.204840 systemd-tmpfiles[1246]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Dec 13 01:32:05.205059 systemd-tmpfiles[1246]: ACLs are not supported, ignoring. Dec 13 01:32:05.205112 systemd-tmpfiles[1246]: ACLs are not supported, ignoring. Dec 13 01:32:05.207471 systemd-tmpfiles[1246]: Detected autofs mount point /boot during canonicalization of boot. Dec 13 01:32:05.207477 systemd-tmpfiles[1246]: Skipping /boot Dec 13 01:32:05.214269 systemd-tmpfiles[1246]: Detected autofs mount point /boot during canonicalization of boot. Dec 13 01:32:05.214276 systemd-tmpfiles[1246]: Skipping /boot Dec 13 01:32:05.237982 zram_generator::config[1272]: No configuration found. 
Dec 13 01:32:05.319995 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:32:05.355060 systemd[1]: Reloading finished in 160 ms. Dec 13 01:32:05.371537 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Dec 13 01:32:05.385175 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 01:32:05.392275 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Dec 13 01:32:05.394562 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Dec 13 01:32:05.396599 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Dec 13 01:32:05.401144 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 13 01:32:05.407082 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 01:32:05.410386 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Dec 13 01:32:05.414481 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 01:32:05.419125 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 01:32:05.422753 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 01:32:05.426424 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 01:32:05.427699 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 01:32:05.437086 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Dec 13 01:32:05.440227 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Dec 13 01:32:05.441473 systemd-udevd[1320]: Using default interface naming scheme 'v255'. Dec 13 01:32:05.443534 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 01:32:05.443661 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 01:32:05.445134 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 01:32:05.445267 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 01:32:05.446871 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 01:32:05.446993 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 01:32:05.455383 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 01:32:05.470128 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 01:32:05.475081 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 01:32:05.477141 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 01:32:05.477912 augenrules[1341]: No rules Dec 13 01:32:05.479417 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 01:32:05.484065 systemd[1]: Starting systemd-update-done.service - Update is Completed... Dec 13 01:32:05.485553 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Dec 13 01:32:05.487228 systemd[1]: Started systemd-userdbd.service - User Database Manager. Dec 13 01:32:05.490392 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Dec 13 01:32:05.491788 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Dec 13 01:32:05.495417 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Dec 13 01:32:05.499508 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 01:32:05.499665 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 01:32:05.501523 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 01:32:05.501648 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 01:32:05.503482 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 01:32:05.503611 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 01:32:05.505435 systemd[1]: Finished systemd-update-done.service - Update is Completed. Dec 13 01:32:05.518947 systemd[1]: Finished ensure-sysext.service. Dec 13 01:32:05.523747 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 01:32:05.535872 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1357) Dec 13 01:32:05.538037 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 01:32:05.538837 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1357) Dec 13 01:32:05.541658 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 13 01:32:05.543848 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 46 scanned by (udev-worker) (1350) Dec 13 01:32:05.545286 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 01:32:05.551023 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 01:32:05.552097 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 01:32:05.553594 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 13 01:32:05.558988 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Dec 13 01:32:05.559996 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 01:32:05.560403 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 01:32:05.560529 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 01:32:05.563284 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 01:32:05.563415 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 13 01:32:05.567214 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Dec 13 01:32:05.568627 systemd-resolved[1314]: Positive Trust Anchors: Dec 13 01:32:05.569237 systemd-resolved[1314]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 01:32:05.569345 systemd-resolved[1314]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 13 01:32:05.571638 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Dec 13 01:32:05.574272 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 01:32:05.574687 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 01:32:05.578194 systemd-resolved[1314]: Defaulting to hostname 'linux'. Dec 13 01:32:05.579178 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 01:32:05.579311 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 01:32:05.582030 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 13 01:32:05.585714 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 13 01:32:05.593347 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Dec 13 01:32:05.594338 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 01:32:05.594395 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 13 01:32:05.614416 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Dec 13 01:32:05.615556 systemd-networkd[1389]: lo: Link UP Dec 13 01:32:05.615569 systemd-networkd[1389]: lo: Gained carrier Dec 13 01:32:05.616521 systemd-networkd[1389]: Enumeration completed Dec 13 01:32:05.616603 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 13 01:32:05.616963 systemd-networkd[1389]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 01:32:05.616973 systemd-networkd[1389]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 01:32:05.617534 systemd-networkd[1389]: eth0: Link UP Dec 13 01:32:05.617537 systemd-networkd[1389]: eth0: Gained carrier Dec 13 01:32:05.617549 systemd-networkd[1389]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 01:32:05.618097 systemd[1]: Reached target network.target - Network. Dec 13 01:32:05.626091 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Dec 13 01:32:05.635916 systemd-networkd[1389]: eth0: DHCPv4 address 10.0.0.74/16, gateway 10.0.0.1 acquired from 10.0.0.1 Dec 13 01:32:05.640329 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Dec 13 01:32:05.640998 systemd-timesyncd[1391]: Contacted time server 10.0.0.1:123 (10.0.0.1). Dec 13 01:32:05.641262 systemd-timesyncd[1391]: Initial clock synchronization to Fri 2024-12-13 01:32:05.803703 UTC. 
Dec 13 01:32:05.641576 systemd[1]: Reached target time-set.target - System Time Set. Dec 13 01:32:05.658032 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 01:32:05.674110 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Dec 13 01:32:05.686078 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Dec 13 01:32:05.698359 lvm[1408]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 01:32:05.699116 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:32:05.731147 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Dec 13 01:32:05.732444 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 13 01:32:05.733464 systemd[1]: Reached target sysinit.target - System Initialization. Dec 13 01:32:05.734453 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Dec 13 01:32:05.735491 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Dec 13 01:32:05.736694 systemd[1]: Started logrotate.timer - Daily rotation of log files. Dec 13 01:32:05.737802 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Dec 13 01:32:05.738839 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Dec 13 01:32:05.739833 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 13 01:32:05.739861 systemd[1]: Reached target paths.target - Path Units. Dec 13 01:32:05.740564 systemd[1]: Reached target timers.target - Timer Units. Dec 13 01:32:05.742270 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Dec 13 01:32:05.744350 systemd[1]: Starting docker.socket - Docker Socket for the API... Dec 13 01:32:05.754642 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Dec 13 01:32:05.756677 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Dec 13 01:32:05.758113 systemd[1]: Listening on docker.socket - Docker Socket for the API. Dec 13 01:32:05.759133 systemd[1]: Reached target sockets.target - Socket Units. Dec 13 01:32:05.759975 systemd[1]: Reached target basic.target - Basic System. Dec 13 01:32:05.760771 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Dec 13 01:32:05.760804 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Dec 13 01:32:05.761668 systemd[1]: Starting containerd.service - containerd container runtime... Dec 13 01:32:05.763534 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Dec 13 01:32:05.763931 lvm[1415]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 01:32:05.766368 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Dec 13 01:32:05.770083 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Dec 13 01:32:05.771054 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Dec 13 01:32:05.773173 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... 
Dec 13 01:32:05.777000 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Dec 13 01:32:05.778460 jq[1418]: false Dec 13 01:32:05.780417 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Dec 13 01:32:05.783524 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Dec 13 01:32:05.786605 systemd[1]: Starting systemd-logind.service - User Login Management... Dec 13 01:32:05.792876 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 13 01:32:05.793239 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 13 01:32:05.796573 systemd[1]: Starting update-engine.service - Update Engine... Dec 13 01:32:05.798323 extend-filesystems[1419]: Found loop3 Dec 13 01:32:05.799587 extend-filesystems[1419]: Found loop4 Dec 13 01:32:05.799587 extend-filesystems[1419]: Found loop5 Dec 13 01:32:05.799587 extend-filesystems[1419]: Found vda Dec 13 01:32:05.799587 extend-filesystems[1419]: Found vda1 Dec 13 01:32:05.799587 extend-filesystems[1419]: Found vda2 Dec 13 01:32:05.799587 extend-filesystems[1419]: Found vda3 Dec 13 01:32:05.799587 extend-filesystems[1419]: Found usr Dec 13 01:32:05.799587 extend-filesystems[1419]: Found vda4 Dec 13 01:32:05.799587 extend-filesystems[1419]: Found vda6 Dec 13 01:32:05.799587 extend-filesystems[1419]: Found vda7 Dec 13 01:32:05.799587 extend-filesystems[1419]: Found vda9 Dec 13 01:32:05.799587 extend-filesystems[1419]: Checking size of /dev/vda9 Dec 13 01:32:05.798628 dbus-daemon[1417]: [system] SELinux support is enabled Dec 13 01:32:05.800388 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Dec 13 01:32:05.804412 systemd[1]: Started dbus.service - D-Bus System Message Bus. Dec 13 01:32:05.815989 jq[1435]: true Dec 13 01:32:05.807936 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Dec 13 01:32:05.812175 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 13 01:32:05.814874 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Dec 13 01:32:05.815185 systemd[1]: motdgen.service: Deactivated successfully. Dec 13 01:32:05.815316 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Dec 13 01:32:05.818750 extend-filesystems[1419]: Resized partition /dev/vda9 Dec 13 01:32:05.817557 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 13 01:32:05.817689 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Dec 13 01:32:05.826541 extend-filesystems[1443]: resize2fs 1.47.1 (20-May-2024) Dec 13 01:32:05.833182 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Dec 13 01:32:05.829474 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 13 01:32:05.829521 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Dec 13 01:32:05.831451 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). 
Dec 13 01:32:05.831470 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Dec 13 01:32:05.841786 jq[1442]: true Dec 13 01:32:05.842124 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 46 scanned by (udev-worker) (1367) Dec 13 01:32:05.842450 (ntainerd)[1444]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Dec 13 01:32:05.852637 update_engine[1431]: I20241213 01:32:05.852367 1431 main.cc:92] Flatcar Update Engine starting Dec 13 01:32:05.854767 tar[1441]: linux-arm64/helm Dec 13 01:32:05.864126 systemd[1]: Started update-engine.service - Update Engine. Dec 13 01:32:05.864219 update_engine[1431]: I20241213 01:32:05.864183 1431 update_check_scheduler.cc:74] Next update check in 4m10s Dec 13 01:32:05.867962 systemd-logind[1427]: Watching system buttons on /dev/input/event0 (Power Button) Dec 13 01:32:05.877072 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Dec 13 01:32:05.869989 systemd-logind[1427]: New seat seat0. Dec 13 01:32:05.877046 systemd[1]: Started locksmithd.service - Cluster reboot manager. Dec 13 01:32:05.878424 systemd[1]: Started systemd-logind.service - User Login Management. Dec 13 01:32:05.893527 extend-filesystems[1443]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Dec 13 01:32:05.893527 extend-filesystems[1443]: old_desc_blocks = 1, new_desc_blocks = 1 Dec 13 01:32:05.893527 extend-filesystems[1443]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Dec 13 01:32:05.890559 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 13 01:32:05.898897 extend-filesystems[1419]: Resized filesystem in /dev/vda9 Dec 13 01:32:05.891873 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Dec 13 01:32:05.906204 bash[1471]: Updated "/home/core/.ssh/authorized_keys" Dec 13 01:32:05.907580 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Dec 13 01:32:05.909269 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Dec 13 01:32:05.932414 locksmithd[1458]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 13 01:32:06.049872 containerd[1444]: time="2024-12-13T01:32:06.048443819Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Dec 13 01:32:06.074457 containerd[1444]: time="2024-12-13T01:32:06.074403669Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:32:06.076994 containerd[1444]: time="2024-12-13T01:32:06.075881865Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.65-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:32:06.076994 containerd[1444]: time="2024-12-13T01:32:06.075918026Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Dec 13 01:32:06.076994 containerd[1444]: time="2024-12-13T01:32:06.075934637Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Dec 13 01:32:06.076994 containerd[1444]: time="2024-12-13T01:32:06.076095281Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." 
type=io.containerd.warning.v1 Dec 13 01:32:06.076994 containerd[1444]: time="2024-12-13T01:32:06.076111769Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Dec 13 01:32:06.076994 containerd[1444]: time="2024-12-13T01:32:06.076162215Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:32:06.076994 containerd[1444]: time="2024-12-13T01:32:06.076173929Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:32:06.076994 containerd[1444]: time="2024-12-13T01:32:06.076348082Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:32:06.076994 containerd[1444]: time="2024-12-13T01:32:06.076363917Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Dec 13 01:32:06.076994 containerd[1444]: time="2024-12-13T01:32:06.076376896Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:32:06.076994 containerd[1444]: time="2024-12-13T01:32:06.076386487Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Dec 13 01:32:06.077273 containerd[1444]: time="2024-12-13T01:32:06.076455789Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:32:06.077273 containerd[1444]: time="2024-12-13T01:32:06.076632840Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:32:06.077273 containerd[1444]: time="2024-12-13T01:32:06.076720957Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:32:06.077273 containerd[1444]: time="2024-12-13T01:32:06.076733405Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Dec 13 01:32:06.077273 containerd[1444]: time="2024-12-13T01:32:06.076797646Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Dec 13 01:32:06.077273 containerd[1444]: time="2024-12-13T01:32:06.076856949Z" level=info msg="metadata content store policy set" policy=shared Dec 13 01:32:06.081302 containerd[1444]: time="2024-12-13T01:32:06.081274150Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Dec 13 01:32:06.081425 containerd[1444]: time="2024-12-13T01:32:06.081411202Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Dec 13 01:32:06.081537 containerd[1444]: time="2024-12-13T01:32:06.081523196Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Dec 13 01:32:06.081615 containerd[1444]: time="2024-12-13T01:32:06.081600864Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." 
type=io.containerd.streaming.v1 Dec 13 01:32:06.081722 containerd[1444]: time="2024-12-13T01:32:06.081706940Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Dec 13 01:32:06.081956 containerd[1444]: time="2024-12-13T01:32:06.081934069Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Dec 13 01:32:06.082386 containerd[1444]: time="2024-12-13T01:32:06.082364818Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Dec 13 01:32:06.082554 containerd[1444]: time="2024-12-13T01:32:06.082534155Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Dec 13 01:32:06.082619 containerd[1444]: time="2024-12-13T01:32:06.082605660Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Dec 13 01:32:06.082672 containerd[1444]: time="2024-12-13T01:32:06.082659657Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Dec 13 01:32:06.082727 containerd[1444]: time="2024-12-13T01:32:06.082715042Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Dec 13 01:32:06.082781 containerd[1444]: time="2024-12-13T01:32:06.082768548Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Dec 13 01:32:06.082883 containerd[1444]: time="2024-12-13T01:32:06.082866543Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Dec 13 01:32:06.082939 containerd[1444]: time="2024-12-13T01:32:06.082926376Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Dec 13 01:32:06.082993 containerd[1444]: time="2024-12-13T01:32:06.082980168Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Dec 13 01:32:06.083047 containerd[1444]: time="2024-12-13T01:32:06.083034491Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Dec 13 01:32:06.083100 containerd[1444]: time="2024-12-13T01:32:06.083086856Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Dec 13 01:32:06.083151 containerd[1444]: time="2024-12-13T01:32:06.083138118Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Dec 13 01:32:06.083228 containerd[1444]: time="2024-12-13T01:32:06.083214358Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Dec 13 01:32:06.083286 containerd[1444]: time="2024-12-13T01:32:06.083273701Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Dec 13 01:32:06.083338 containerd[1444]: time="2024-12-13T01:32:06.083326229Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Dec 13 01:32:06.083413 containerd[1444]: time="2024-12-13T01:32:06.083399041Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Dec 13 01:32:06.083471 containerd[1444]: time="2024-12-13T01:32:06.083458915Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." 
type=io.containerd.grpc.v1 Dec 13 01:32:06.083523 containerd[1444]: time="2024-12-13T01:32:06.083511238Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Dec 13 01:32:06.083574 containerd[1444]: time="2024-12-13T01:32:06.083562500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Dec 13 01:32:06.083626 containerd[1444]: time="2024-12-13T01:32:06.083614538Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Dec 13 01:32:06.083678 containerd[1444]: time="2024-12-13T01:32:06.083666412Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Dec 13 01:32:06.083734 containerd[1444]: time="2024-12-13T01:32:06.083721266Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Dec 13 01:32:06.083796 containerd[1444]: time="2024-12-13T01:32:06.083783262Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Dec 13 01:32:06.083866 containerd[1444]: time="2024-12-13T01:32:06.083853340Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Dec 13 01:32:06.083923 containerd[1444]: time="2024-12-13T01:32:06.083910112Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Dec 13 01:32:06.083980 containerd[1444]: time="2024-12-13T01:32:06.083967741Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Dec 13 01:32:06.084043 containerd[1444]: time="2024-12-13T01:32:06.084029002Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Dec 13 01:32:06.084095 containerd[1444]: time="2024-12-13T01:32:06.084082673Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Dec 13 01:32:06.084167 containerd[1444]: time="2024-12-13T01:32:06.084153893Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Dec 13 01:32:06.084338 containerd[1444]: time="2024-12-13T01:32:06.084324005Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Dec 13 01:32:06.084546 containerd[1444]: time="2024-12-13T01:32:06.084527707Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Dec 13 01:32:06.084613 containerd[1444]: time="2024-12-13T01:32:06.084599335Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Dec 13 01:32:06.084670 containerd[1444]: time="2024-12-13T01:32:06.084656719Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Dec 13 01:32:06.084716 containerd[1444]: time="2024-12-13T01:32:06.084703900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Dec 13 01:32:06.084767 containerd[1444]: time="2024-12-13T01:32:06.084755652Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Dec 13 01:32:06.084845 containerd[1444]: time="2024-12-13T01:32:06.084814220Z" level=info msg="NRI interface is disabled by configuration." 
Dec 13 01:32:06.084900 containerd[1444]: time="2024-12-13T01:32:06.084886583Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Dec 13 01:32:06.085321 containerd[1444]: time="2024-12-13T01:32:06.085261458Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Dec 13 01:32:06.085500 containerd[1444]: time="2024-12-13T01:32:06.085482139Z" level=info msg="Connect containerd service" Dec 13 01:32:06.085585 containerd[1444]: time="2024-12-13T01:32:06.085571358Z" level=info msg="using legacy CRI server" Dec 13 01:32:06.085652 containerd[1444]: time="2024-12-13T01:32:06.085638537Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 13 01:32:06.087183 containerd[1444]: time="2024-12-13T01:32:06.087152608Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Dec 13 01:32:06.087939 containerd[1444]: time="2024-12-13T01:32:06.087911052Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" 
error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 01:32:06.090852 containerd[1444]: time="2024-12-13T01:32:06.088205320Z" level=info msg="Start subscribing containerd event" Dec 13 01:32:06.090852 containerd[1444]: time="2024-12-13T01:32:06.088257562Z" level=info msg="Start recovering state" Dec 13 01:32:06.090852 containerd[1444]: time="2024-12-13T01:32:06.088318089Z" level=info msg="Start event monitor" Dec 13 01:32:06.090852 containerd[1444]: time="2024-12-13T01:32:06.088329394Z" level=info msg="Start snapshots syncer" Dec 13 01:32:06.090852 containerd[1444]: time="2024-12-13T01:32:06.088337108Z" level=info msg="Start cni network conf syncer for default" Dec 13 01:32:06.090852 containerd[1444]: time="2024-12-13T01:32:06.088344210Z" level=info msg="Start streaming server" Dec 13 01:32:06.090852 containerd[1444]: time="2024-12-13T01:32:06.088391227Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 13 01:32:06.090852 containerd[1444]: time="2024-12-13T01:32:06.088431592Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 13 01:32:06.090852 containerd[1444]: time="2024-12-13T01:32:06.089568380Z" level=info msg="containerd successfully booted in 0.041872s" Dec 13 01:32:06.088574 systemd[1]: Started containerd.service - containerd container runtime. Dec 13 01:32:06.214710 tar[1441]: linux-arm64/LICENSE Dec 13 01:32:06.214946 tar[1441]: linux-arm64/README.md Dec 13 01:32:06.225261 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Dec 13 01:32:06.657907 sshd_keygen[1437]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 13 01:32:06.676498 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Dec 13 01:32:06.686058 systemd[1]: Starting issuegen.service - Generate /run/issue... Dec 13 01:32:06.692083 systemd[1]: issuegen.service: Deactivated successfully. Dec 13 01:32:06.692252 systemd[1]: Finished issuegen.service - Generate /run/issue. Dec 13 01:32:06.695632 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Dec 13 01:32:06.708403 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Dec 13 01:32:06.711068 systemd[1]: Started getty@tty1.service - Getty on tty1. Dec 13 01:32:06.713222 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Dec 13 01:32:06.714528 systemd[1]: Reached target getty.target - Login Prompts. Dec 13 01:32:07.206846 systemd-networkd[1389]: eth0: Gained IPv6LL Dec 13 01:32:07.209905 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Dec 13 01:32:07.211391 systemd[1]: Reached target network-online.target - Network is Online. Dec 13 01:32:07.225088 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Dec 13 01:32:07.227317 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:32:07.229275 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Dec 13 01:32:07.244116 systemd[1]: coreos-metadata.service: Deactivated successfully. Dec 13 01:32:07.244354 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Dec 13 01:32:07.245771 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Dec 13 01:32:07.250169 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. 
Dec 13 01:32:07.702178 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:32:07.703499 systemd[1]: Reached target multi-user.target - Multi-User System. Dec 13 01:32:07.705916 (kubelet)[1530]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:32:07.707921 systemd[1]: Startup finished in 559ms (kernel) + 4.483s (initrd) + 3.554s (userspace) = 8.598s. Dec 13 01:32:08.169540 kubelet[1530]: E1213 01:32:08.169396 1530 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:32:08.172299 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:32:08.172444 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:32:12.114382 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Dec 13 01:32:12.115460 systemd[1]: Started sshd@0-10.0.0.74:22-10.0.0.1:37392.service - OpenSSH per-connection server daemon (10.0.0.1:37392). Dec 13 01:32:12.165503 sshd[1544]: Accepted publickey for core from 10.0.0.1 port 37392 ssh2: RSA SHA256:yVKhZEHbC7ylZ7bY3Y8pwdh1t/xp6Vz/y3yLFfd9j+Q Dec 13 01:32:12.167101 sshd[1544]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:32:12.175181 systemd-logind[1427]: New session 1 of user core. Dec 13 01:32:12.176311 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Dec 13 01:32:12.189135 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Dec 13 01:32:12.197328 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Dec 13 01:32:12.200093 systemd[1]: Starting user@500.service - User Manager for UID 500... Dec 13 01:32:12.205287 (systemd)[1548]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 13 01:32:12.273913 systemd[1548]: Queued start job for default target default.target. Dec 13 01:32:12.284734 systemd[1548]: Created slice app.slice - User Application Slice. Dec 13 01:32:12.284779 systemd[1548]: Reached target paths.target - Paths. Dec 13 01:32:12.284790 systemd[1548]: Reached target timers.target - Timers. Dec 13 01:32:12.286024 systemd[1548]: Starting dbus.socket - D-Bus User Message Bus Socket... Dec 13 01:32:12.295789 systemd[1548]: Listening on dbus.socket - D-Bus User Message Bus Socket. Dec 13 01:32:12.295890 systemd[1548]: Reached target sockets.target - Sockets. Dec 13 01:32:12.295904 systemd[1548]: Reached target basic.target - Basic System. Dec 13 01:32:12.295941 systemd[1548]: Reached target default.target - Main User Target. Dec 13 01:32:12.295968 systemd[1548]: Startup finished in 85ms. Dec 13 01:32:12.296198 systemd[1]: Started user@500.service - User Manager for UID 500. Dec 13 01:32:12.297490 systemd[1]: Started session-1.scope - Session 1 of User core. Dec 13 01:32:12.356434 systemd[1]: Started sshd@1-10.0.0.74:22-10.0.0.1:37394.service - OpenSSH per-connection server daemon (10.0.0.1:37394). 
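The kubelet exit above ("open /var/lib/kubelet/config.yaml: no such file or directory") is the usual pre-bootstrap state: the unit is enabled, but nothing has written the kubelet configuration yet, so the process exits and systemd keeps retrying until kubeadm init or kubeadm join runs. A small sketch of that check follows; only /var/lib/kubelet/config.yaml comes from the log, the other two paths are the files kubeadm conventionally creates and are assumptions here.

    #!/usr/bin/env python3
    # Sketch: distinguish "node not yet bootstrapped" from a genuinely broken kubelet.
    # config.yaml is the path from the log above; kubelet.conf and kubeadm-flags.env
    # are the files kubeadm conventionally writes during init/join (assumed paths).
    from pathlib import Path

    EXPECTED = {
        "kubelet config": Path("/var/lib/kubelet/config.yaml"),
        "kubeconfig": Path("/etc/kubernetes/kubelet.conf"),
        "kubeadm flags": Path("/var/lib/kubelet/kubeadm-flags.env"),
    }

    def main() -> None:
        missing = {name: p for name, p in EXPECTED.items() if not p.exists()}
        if missing:
            print("kubelet failure looks like the expected pre-kubeadm state; missing:")
            for name, path in missing.items():
                print(f"  {name}: {path}")
        else:
            print("bootstrap files are present; look for another cause in the kubelet logs")

    if __name__ == "__main__":
        main()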
Dec 13 01:32:12.394055 sshd[1559]: Accepted publickey for core from 10.0.0.1 port 37394 ssh2: RSA SHA256:yVKhZEHbC7ylZ7bY3Y8pwdh1t/xp6Vz/y3yLFfd9j+Q Dec 13 01:32:12.395287 sshd[1559]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:32:12.400886 systemd-logind[1427]: New session 2 of user core. Dec 13 01:32:12.413025 systemd[1]: Started session-2.scope - Session 2 of User core. Dec 13 01:32:12.466306 sshd[1559]: pam_unix(sshd:session): session closed for user core Dec 13 01:32:12.474132 systemd[1]: sshd@1-10.0.0.74:22-10.0.0.1:37394.service: Deactivated successfully. Dec 13 01:32:12.475465 systemd[1]: session-2.scope: Deactivated successfully. Dec 13 01:32:12.477939 systemd-logind[1427]: Session 2 logged out. Waiting for processes to exit. Dec 13 01:32:12.479029 systemd[1]: Started sshd@2-10.0.0.74:22-10.0.0.1:33332.service - OpenSSH per-connection server daemon (10.0.0.1:33332). Dec 13 01:32:12.479814 systemd-logind[1427]: Removed session 2. Dec 13 01:32:12.514946 sshd[1566]: Accepted publickey for core from 10.0.0.1 port 33332 ssh2: RSA SHA256:yVKhZEHbC7ylZ7bY3Y8pwdh1t/xp6Vz/y3yLFfd9j+Q Dec 13 01:32:12.516174 sshd[1566]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:32:12.520117 systemd-logind[1427]: New session 3 of user core. Dec 13 01:32:12.533014 systemd[1]: Started session-3.scope - Session 3 of User core. Dec 13 01:32:12.580620 sshd[1566]: pam_unix(sshd:session): session closed for user core Dec 13 01:32:12.589186 systemd[1]: sshd@2-10.0.0.74:22-10.0.0.1:33332.service: Deactivated successfully. Dec 13 01:32:12.591065 systemd[1]: session-3.scope: Deactivated successfully. Dec 13 01:32:12.592337 systemd-logind[1427]: Session 3 logged out. Waiting for processes to exit. Dec 13 01:32:12.600072 systemd[1]: Started sshd@3-10.0.0.74:22-10.0.0.1:33344.service - OpenSSH per-connection server daemon (10.0.0.1:33344). Dec 13 01:32:12.601044 systemd-logind[1427]: Removed session 3. Dec 13 01:32:12.632420 sshd[1573]: Accepted publickey for core from 10.0.0.1 port 33344 ssh2: RSA SHA256:yVKhZEHbC7ylZ7bY3Y8pwdh1t/xp6Vz/y3yLFfd9j+Q Dec 13 01:32:12.633503 sshd[1573]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:32:12.637059 systemd-logind[1427]: New session 4 of user core. Dec 13 01:32:12.642942 systemd[1]: Started session-4.scope - Session 4 of User core. Dec 13 01:32:12.693656 sshd[1573]: pam_unix(sshd:session): session closed for user core Dec 13 01:32:12.704935 systemd[1]: sshd@3-10.0.0.74:22-10.0.0.1:33344.service: Deactivated successfully. Dec 13 01:32:12.706298 systemd[1]: session-4.scope: Deactivated successfully. Dec 13 01:32:12.707501 systemd-logind[1427]: Session 4 logged out. Waiting for processes to exit. Dec 13 01:32:12.708541 systemd[1]: Started sshd@4-10.0.0.74:22-10.0.0.1:33354.service - OpenSSH per-connection server daemon (10.0.0.1:33354). Dec 13 01:32:12.709270 systemd-logind[1427]: Removed session 4. Dec 13 01:32:12.743738 sshd[1580]: Accepted publickey for core from 10.0.0.1 port 33354 ssh2: RSA SHA256:yVKhZEHbC7ylZ7bY3Y8pwdh1t/xp6Vz/y3yLFfd9j+Q Dec 13 01:32:12.744851 sshd[1580]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:32:12.748377 systemd-logind[1427]: New session 5 of user core. Dec 13 01:32:12.761948 systemd[1]: Started session-5.scope - Session 5 of User core. 
Dec 13 01:32:12.819043 sudo[1583]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Dec 13 01:32:12.821411 sudo[1583]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 01:32:12.836576 sudo[1583]: pam_unix(sudo:session): session closed for user root Dec 13 01:32:12.838128 sshd[1580]: pam_unix(sshd:session): session closed for user core Dec 13 01:32:12.847066 systemd[1]: sshd@4-10.0.0.74:22-10.0.0.1:33354.service: Deactivated successfully. Dec 13 01:32:12.848436 systemd[1]: session-5.scope: Deactivated successfully. Dec 13 01:32:12.849638 systemd-logind[1427]: Session 5 logged out. Waiting for processes to exit. Dec 13 01:32:12.850792 systemd[1]: Started sshd@5-10.0.0.74:22-10.0.0.1:33370.service - OpenSSH per-connection server daemon (10.0.0.1:33370). Dec 13 01:32:12.851664 systemd-logind[1427]: Removed session 5. Dec 13 01:32:12.886387 sshd[1588]: Accepted publickey for core from 10.0.0.1 port 33370 ssh2: RSA SHA256:yVKhZEHbC7ylZ7bY3Y8pwdh1t/xp6Vz/y3yLFfd9j+Q Dec 13 01:32:12.887529 sshd[1588]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:32:12.891029 systemd-logind[1427]: New session 6 of user core. Dec 13 01:32:12.901945 systemd[1]: Started session-6.scope - Session 6 of User core. Dec 13 01:32:12.952072 sudo[1592]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Dec 13 01:32:12.952980 sudo[1592]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 01:32:12.955853 sudo[1592]: pam_unix(sudo:session): session closed for user root Dec 13 01:32:12.960166 sudo[1591]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Dec 13 01:32:12.960442 sudo[1591]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 01:32:12.977213 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Dec 13 01:32:12.978219 auditctl[1595]: No rules Dec 13 01:32:12.978558 systemd[1]: audit-rules.service: Deactivated successfully. Dec 13 01:32:12.978757 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Dec 13 01:32:12.980738 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Dec 13 01:32:13.002398 augenrules[1613]: No rules Dec 13 01:32:13.003525 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Dec 13 01:32:13.004509 sudo[1591]: pam_unix(sudo:session): session closed for user root Dec 13 01:32:13.006018 sshd[1588]: pam_unix(sshd:session): session closed for user core Dec 13 01:32:13.021806 systemd[1]: sshd@5-10.0.0.74:22-10.0.0.1:33370.service: Deactivated successfully. Dec 13 01:32:13.023464 systemd[1]: session-6.scope: Deactivated successfully. Dec 13 01:32:13.027007 systemd-logind[1427]: Session 6 logged out. Waiting for processes to exit. Dec 13 01:32:13.028090 systemd[1]: Started sshd@6-10.0.0.74:22-10.0.0.1:33374.service - OpenSSH per-connection server daemon (10.0.0.1:33374). Dec 13 01:32:13.029872 systemd-logind[1427]: Removed session 6. Dec 13 01:32:13.063942 sshd[1621]: Accepted publickey for core from 10.0.0.1 port 33374 ssh2: RSA SHA256:yVKhZEHbC7ylZ7bY3Y8pwdh1t/xp6Vz/y3yLFfd9j+Q Dec 13 01:32:13.065047 sshd[1621]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:32:13.068350 systemd-logind[1427]: New session 7 of user core. Dec 13 01:32:13.080016 systemd[1]: Started session-7.scope - Session 7 of User core. 
Dec 13 01:32:13.129438 sudo[1624]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 13 01:32:13.129716 sudo[1624]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 01:32:13.444107 systemd[1]: Starting docker.service - Docker Application Container Engine... Dec 13 01:32:13.444298 (dockerd)[1643]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Dec 13 01:32:13.702148 dockerd[1643]: time="2024-12-13T01:32:13.702015001Z" level=info msg="Starting up" Dec 13 01:32:13.840591 dockerd[1643]: time="2024-12-13T01:32:13.840548932Z" level=info msg="Loading containers: start." Dec 13 01:32:13.921846 kernel: Initializing XFRM netlink socket Dec 13 01:32:13.981368 systemd-networkd[1389]: docker0: Link UP Dec 13 01:32:13.997887 dockerd[1643]: time="2024-12-13T01:32:13.997840868Z" level=info msg="Loading containers: done." Dec 13 01:32:14.011699 dockerd[1643]: time="2024-12-13T01:32:14.011654403Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 13 01:32:14.011836 dockerd[1643]: time="2024-12-13T01:32:14.011743381Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Dec 13 01:32:14.011872 dockerd[1643]: time="2024-12-13T01:32:14.011860313Z" level=info msg="Daemon has completed initialization" Dec 13 01:32:14.036573 systemd[1]: Started docker.service - Docker Application Container Engine. Dec 13 01:32:14.036741 dockerd[1643]: time="2024-12-13T01:32:14.036489866Z" level=info msg="API listen on /run/docker.sock" Dec 13 01:32:14.708258 containerd[1444]: time="2024-12-13T01:32:14.708203167Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\"" Dec 13 01:32:15.346304 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1612170840.mount: Deactivated successfully. 
Dec 13 01:32:17.114832 containerd[1444]: time="2024-12-13T01:32:17.114346497Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:32:17.115194 containerd[1444]: time="2024-12-13T01:32:17.114822637Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.12: active requests=0, bytes read=32201252" Dec 13 01:32:17.119327 containerd[1444]: time="2024-12-13T01:32:17.119275273Z" level=info msg="ImageCreate event name:\"sha256:50c86b7f73fdd28bacd4abf45260c9d3abc3b57eb038fa61fc45b5d0f2763e6f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:32:17.122087 containerd[1444]: time="2024-12-13T01:32:17.122049124Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:32:17.124242 containerd[1444]: time="2024-12-13T01:32:17.124193802Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.12\" with image id \"sha256:50c86b7f73fdd28bacd4abf45260c9d3abc3b57eb038fa61fc45b5d0f2763e6f\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26\", size \"32198050\" in 2.41594349s" Dec 13 01:32:17.124242 containerd[1444]: time="2024-12-13T01:32:17.124232864Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\" returns image reference \"sha256:50c86b7f73fdd28bacd4abf45260c9d3abc3b57eb038fa61fc45b5d0f2763e6f\"" Dec 13 01:32:17.143301 containerd[1444]: time="2024-12-13T01:32:17.143251362Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\"" Dec 13 01:32:18.423068 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 13 01:32:18.436007 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:32:18.524999 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:32:18.528558 (kubelet)[1871]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:32:18.570225 kubelet[1871]: E1213 01:32:18.570179 1871 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:32:18.574134 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:32:18.574281 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
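After each failure like the one above, systemd schedules a restart job and bumps the unit's restart counter, which is why the same config.yaml error keeps reappearing every few seconds until bootstrap completes. One way to see how many times that has already happened is the NRestarts unit property, a stock systemd property readable via systemctl show; a short sketch, with the unit name taken from the log:

    #!/usr/bin/env python3
    # Sketch: read how many times systemd has restarted kubelet.service so far.
    # Uses `systemctl show -p NRestarts --value`, both standard systemctl options.
    import subprocess

    def restart_count(unit: str = "kubelet.service") -> int:
        out = subprocess.run(
            ["systemctl", "show", "-p", "NRestarts", "--value", unit],
            capture_output=True, text=True, check=True,
        ).stdout.strip()
        return int(out or 0)

    if __name__ == "__main__":
        print("kubelet restarts so far:", restart_count())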
Dec 13 01:32:18.967951 containerd[1444]: time="2024-12-13T01:32:18.967725615Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:32:18.968811 containerd[1444]: time="2024-12-13T01:32:18.968531263Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.12: active requests=0, bytes read=29381299" Dec 13 01:32:18.969605 containerd[1444]: time="2024-12-13T01:32:18.969535683Z" level=info msg="ImageCreate event name:\"sha256:2d47abaa6ccc533f84ef74fff6d509de10bb040317351b45afe95a8021a1ddf7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:32:18.972441 containerd[1444]: time="2024-12-13T01:32:18.972390213Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:32:18.973618 containerd[1444]: time="2024-12-13T01:32:18.973569426Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.12\" with image id \"sha256:2d47abaa6ccc533f84ef74fff6d509de10bb040317351b45afe95a8021a1ddf7\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412\", size \"30783618\" in 1.830273943s" Dec 13 01:32:18.973618 containerd[1444]: time="2024-12-13T01:32:18.973605975Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\" returns image reference \"sha256:2d47abaa6ccc533f84ef74fff6d509de10bb040317351b45afe95a8021a1ddf7\"" Dec 13 01:32:18.992540 containerd[1444]: time="2024-12-13T01:32:18.992511736Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\"" Dec 13 01:32:20.058019 containerd[1444]: time="2024-12-13T01:32:20.057724121Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:32:20.059992 containerd[1444]: time="2024-12-13T01:32:20.059946592Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.12: active requests=0, bytes read=15765642" Dec 13 01:32:20.062392 containerd[1444]: time="2024-12-13T01:32:20.061632586Z" level=info msg="ImageCreate event name:\"sha256:ae633c52a23907b58f7a7867d2cccf3d3f5ebd8977beb6788e20fbecd3f446db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:32:20.065527 containerd[1444]: time="2024-12-13T01:32:20.065476328Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:32:20.066618 containerd[1444]: time="2024-12-13T01:32:20.066526814Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.12\" with image id \"sha256:ae633c52a23907b58f7a7867d2cccf3d3f5ebd8977beb6788e20fbecd3f446db\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c\", size \"17167979\" in 1.073979664s" Dec 13 01:32:20.066618 containerd[1444]: time="2024-12-13T01:32:20.066564130Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\" returns image reference \"sha256:ae633c52a23907b58f7a7867d2cccf3d3f5ebd8977beb6788e20fbecd3f446db\"" Dec 13 01:32:20.086466 
containerd[1444]: time="2024-12-13T01:32:20.086427298Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\"" Dec 13 01:32:21.094978 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2745861594.mount: Deactivated successfully. Dec 13 01:32:21.432049 containerd[1444]: time="2024-12-13T01:32:21.431928176Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:32:21.432786 containerd[1444]: time="2024-12-13T01:32:21.432486705Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.12: active requests=0, bytes read=25273979" Dec 13 01:32:21.433529 containerd[1444]: time="2024-12-13T01:32:21.433465746Z" level=info msg="ImageCreate event name:\"sha256:768ee8cfd9311233d038d18430c18136e1ae4dd2e6de40fcf1c670bba2da6d06\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:32:21.435537 containerd[1444]: time="2024-12-13T01:32:21.435480341Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:32:21.436144 containerd[1444]: time="2024-12-13T01:32:21.436104009Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.12\" with image id \"sha256:768ee8cfd9311233d038d18430c18136e1ae4dd2e6de40fcf1c670bba2da6d06\", repo tag \"registry.k8s.io/kube-proxy:v1.29.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\", size \"25272996\" in 1.349638676s" Dec 13 01:32:21.436194 containerd[1444]: time="2024-12-13T01:32:21.436143236Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\" returns image reference \"sha256:768ee8cfd9311233d038d18430c18136e1ae4dd2e6de40fcf1c670bba2da6d06\"" Dec 13 01:32:21.453540 containerd[1444]: time="2024-12-13T01:32:21.453509703Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Dec 13 01:32:22.157837 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3121479730.mount: Deactivated successfully. 
Dec 13 01:32:22.892597 containerd[1444]: time="2024-12-13T01:32:22.892544653Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:32:22.893090 containerd[1444]: time="2024-12-13T01:32:22.893064458Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485383" Dec 13 01:32:22.894051 containerd[1444]: time="2024-12-13T01:32:22.894016259Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:32:22.897011 containerd[1444]: time="2024-12-13T01:32:22.896948767Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:32:22.898153 containerd[1444]: time="2024-12-13T01:32:22.898104056Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.444557134s" Dec 13 01:32:22.898153 containerd[1444]: time="2024-12-13T01:32:22.898140423Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Dec 13 01:32:22.916398 containerd[1444]: time="2024-12-13T01:32:22.916364696Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Dec 13 01:32:23.349543 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1117565450.mount: Deactivated successfully. 
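The var-lib-containerd-tmpmounts-containerd\x2dmount….mount units that keep being deactivated throughout this pull sequence are transient mount units containerd creates under /var/lib/containerd/tmpmounts while unpacking image layers; the \x2d sequences are systemd's escaping of literal '-' characters inside a path component. The unit name can be turned back into the path with systemd-escape --unescape --path, or with a few lines of Python that mirror the same rule (a sketch, not a general replacement for systemd-escape):

    #!/usr/bin/env python3
    # Sketch: convert a systemd mount unit name back into the path it represents.
    # '-' separates path components; literal characters are encoded as \xNN,
    # so '-' inside a component appears as \x2d (as in the units logged above).
    import re

    def unit_to_path(unit: str) -> str:
        name = unit.removesuffix(".mount")
        parts = name.split("-")
        decoded = [re.sub(r"\\x([0-9a-fA-F]{2})",
                          lambda m: chr(int(m.group(1), 16)), part)
                   for part in parts]
        return "/" + "/".join(decoded)

    if __name__ == "__main__":
        print(unit_to_path(r"var-lib-containerd-tmpmounts-containerd\x2dmount1117565450.mount"))
        # -> /var/lib/containerd/tmpmounts/containerd-mount1117565450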
Dec 13 01:32:23.354069 containerd[1444]: time="2024-12-13T01:32:23.354019732Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:32:23.354455 containerd[1444]: time="2024-12-13T01:32:23.354428750Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268823" Dec 13 01:32:23.355300 containerd[1444]: time="2024-12-13T01:32:23.355257849Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:32:23.357337 containerd[1444]: time="2024-12-13T01:32:23.357282736Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:32:23.359418 containerd[1444]: time="2024-12-13T01:32:23.358992122Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 442.589859ms" Dec 13 01:32:23.359418 containerd[1444]: time="2024-12-13T01:32:23.359030843Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Dec 13 01:32:23.379186 containerd[1444]: time="2024-12-13T01:32:23.379147278Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Dec 13 01:32:23.998039 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4167321878.mount: Deactivated successfully. Dec 13 01:32:26.625840 containerd[1444]: time="2024-12-13T01:32:26.625778576Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:32:26.627155 containerd[1444]: time="2024-12-13T01:32:26.627119982Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=65200788" Dec 13 01:32:26.627849 containerd[1444]: time="2024-12-13T01:32:26.627790925Z" level=info msg="ImageCreate event name:\"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:32:26.631395 containerd[1444]: time="2024-12-13T01:32:26.630793267Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:32:26.632119 containerd[1444]: time="2024-12-13T01:32:26.632075510Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"65198393\" in 3.25289084s" Dec 13 01:32:26.632119 containerd[1444]: time="2024-12-13T01:32:26.632115566Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\"" Dec 13 01:32:28.824649 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
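The "Pulled image … size \"N\" in D" lines from kube-apiserver through etcd all share one message format, which makes per-image timing easy to extract from the journal. The sketch below parses those lines and prints rough pull throughput; the regex is written for exactly the format shown here (byte sizes, durations such as 442.589859ms or 3.25289084s) and would need extending for other Go duration forms.

    #!/usr/bin/env python3
    # Sketch: extract image pull sizes/durations from journal lines like the ones
    # above and compute rough throughput. Feed it `journalctl -u containerd -o cat`.
    import re
    import sys

    # Matches: Pulled image "IMG" ... size "BYTES" in DURATION
    # Quotes may appear escaped (\") inside the msg field, hence the optional backslashes.
    PULL_RE = re.compile(
        r'Pulled image \\?"(?P<image>[^"\\]+)\\?".*'
        r'size \\?"(?P<size>\d+)\\?" in (?P<dur>[\d.]+)(?P<unit>ms|s)'
    )

    def seconds(value: str, unit: str) -> float:
        return float(value) / 1000.0 if unit == "ms" else float(value)

    def main() -> None:
        for line in sys.stdin:
            m = PULL_RE.search(line)
            if not m:
                continue
            size = int(m.group("size"))
            dur = seconds(m.group("dur"), m.group("unit"))
            rate = size / dur / (1024 * 1024) if dur else float("inf")
            print(f"{m.group('image')}: {size} bytes in {dur:.3f}s (~{rate:.1f} MiB/s)")

    if __name__ == "__main__":
        main()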
Dec 13 01:32:28.834977 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:32:28.962081 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:32:28.966103 (kubelet)[2095]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:32:29.002493 kubelet[2095]: E1213 01:32:29.002436 2095 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:32:29.004862 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:32:29.004995 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:32:32.788798 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:32:32.796006 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:32:32.809165 systemd[1]: Reloading requested from client PID 2112 ('systemctl') (unit session-7.scope)... Dec 13 01:32:32.809180 systemd[1]: Reloading... Dec 13 01:32:32.876892 zram_generator::config[2154]: No configuration found. Dec 13 01:32:32.982176 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:32:33.033182 systemd[1]: Reloading finished in 223 ms. Dec 13 01:32:33.067994 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Dec 13 01:32:33.068058 systemd[1]: kubelet.service: Failed with result 'signal'. Dec 13 01:32:33.068246 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:32:33.070160 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:32:33.156688 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:32:33.160068 (kubelet)[2197]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 13 01:32:33.198387 kubelet[2197]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 01:32:33.198387 kubelet[2197]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 01:32:33.198387 kubelet[2197]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Dec 13 01:32:33.198673 kubelet[2197]: I1213 01:32:33.198427 2197 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 01:32:34.357414 kubelet[2197]: I1213 01:32:34.357376 2197 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Dec 13 01:32:34.357414 kubelet[2197]: I1213 01:32:34.357405 2197 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 01:32:34.357758 kubelet[2197]: I1213 01:32:34.357604 2197 server.go:919] "Client rotation is on, will bootstrap in background" Dec 13 01:32:34.380424 kubelet[2197]: I1213 01:32:34.380390 2197 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 01:32:34.380871 kubelet[2197]: E1213 01:32:34.380850 2197 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.74:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.74:6443: connect: connection refused Dec 13 01:32:34.389339 kubelet[2197]: I1213 01:32:34.389312 2197 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Dec 13 01:32:34.390345 kubelet[2197]: I1213 01:32:34.390308 2197 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 01:32:34.390516 kubelet[2197]: I1213 01:32:34.390493 2197 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 01:32:34.390589 kubelet[2197]: I1213 01:32:34.390518 2197 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 01:32:34.390589 kubelet[2197]: I1213 01:32:34.390537 2197 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 01:32:34.391605 kubelet[2197]: I1213 01:32:34.391573 2197 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:32:34.395484 kubelet[2197]: I1213 01:32:34.395454 2197 kubelet.go:396] "Attempting to sync node with API server" Dec 13 01:32:34.395484 kubelet[2197]: 
I1213 01:32:34.395481 2197 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 01:32:34.395541 kubelet[2197]: I1213 01:32:34.395504 2197 kubelet.go:312] "Adding apiserver pod source" Dec 13 01:32:34.395541 kubelet[2197]: I1213 01:32:34.395519 2197 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 01:32:34.396171 kubelet[2197]: W1213 01:32:34.396102 2197 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.74:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.74:6443: connect: connection refused Dec 13 01:32:34.396171 kubelet[2197]: E1213 01:32:34.396149 2197 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.74:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.74:6443: connect: connection refused Dec 13 01:32:34.396377 kubelet[2197]: W1213 01:32:34.396344 2197 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.74:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.74:6443: connect: connection refused Dec 13 01:32:34.396466 kubelet[2197]: E1213 01:32:34.396441 2197 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.74:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.74:6443: connect: connection refused Dec 13 01:32:34.397696 kubelet[2197]: I1213 01:32:34.397676 2197 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Dec 13 01:32:34.398188 kubelet[2197]: I1213 01:32:34.398158 2197 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 01:32:34.398696 kubelet[2197]: W1213 01:32:34.398675 2197 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Dec 13 01:32:34.399456 kubelet[2197]: I1213 01:32:34.399437 2197 server.go:1256] "Started kubelet" Dec 13 01:32:34.400179 kubelet[2197]: I1213 01:32:34.399680 2197 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 01:32:34.400569 kubelet[2197]: I1213 01:32:34.399807 2197 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 01:32:34.400770 kubelet[2197]: I1213 01:32:34.400750 2197 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 01:32:34.401843 kubelet[2197]: I1213 01:32:34.401137 2197 server.go:461] "Adding debug handlers to kubelet server" Dec 13 01:32:34.403608 kubelet[2197]: I1213 01:32:34.403399 2197 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 01:32:34.403608 kubelet[2197]: I1213 01:32:34.403552 2197 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 01:32:34.403692 kubelet[2197]: I1213 01:32:34.403637 2197 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Dec 13 01:32:34.403692 kubelet[2197]: I1213 01:32:34.403680 2197 reconciler_new.go:29] "Reconciler: start to sync state" Dec 13 01:32:34.404421 kubelet[2197]: W1213 01:32:34.403924 2197 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.74:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.74:6443: connect: connection refused Dec 13 01:32:34.404421 kubelet[2197]: E1213 01:32:34.403968 2197 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.74:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.74:6443: connect: connection refused Dec 13 01:32:34.404421 kubelet[2197]: E1213 01:32:34.404039 2197 kubelet.go:1462] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 01:32:34.404421 kubelet[2197]: E1213 01:32:34.404282 2197 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.74:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.74:6443: connect: connection refused" interval="200ms" Dec 13 01:32:34.404840 kubelet[2197]: I1213 01:32:34.404823 2197 factory.go:221] Registration of the systemd container factory successfully Dec 13 01:32:34.405149 kubelet[2197]: I1213 01:32:34.405009 2197 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 01:32:34.406410 kubelet[2197]: I1213 01:32:34.406396 2197 factory.go:221] Registration of the containerd container factory successfully Dec 13 01:32:34.408242 kubelet[2197]: E1213 01:32:34.408195 2197 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.74:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.74:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18109885337b0daa default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-12-13 01:32:34.399415722 +0000 UTC m=+1.236004850,LastTimestamp:2024-12-13 01:32:34.399415722 +0000 UTC m=+1.236004850,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Dec 13 01:32:34.417087 kubelet[2197]: I1213 01:32:34.417068 2197 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 01:32:34.417423 kubelet[2197]: I1213 01:32:34.417225 2197 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 01:32:34.417423 kubelet[2197]: I1213 01:32:34.417247 2197 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:32:34.417956 kubelet[2197]: I1213 01:32:34.417931 2197 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 01:32:34.418842 kubelet[2197]: I1213 01:32:34.418800 2197 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Dec 13 01:32:34.418842 kubelet[2197]: I1213 01:32:34.418826 2197 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 01:32:34.419067 kubelet[2197]: I1213 01:32:34.418932 2197 kubelet.go:2329] "Starting kubelet main sync loop" Dec 13 01:32:34.419067 kubelet[2197]: E1213 01:32:34.418975 2197 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 01:32:34.419321 kubelet[2197]: W1213 01:32:34.419289 2197 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.74:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.74:6443: connect: connection refused Dec 13 01:32:34.419321 kubelet[2197]: E1213 01:32:34.419321 2197 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.74:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.74:6443: connect: connection refused Dec 13 01:32:34.480505 kubelet[2197]: I1213 01:32:34.480391 2197 policy_none.go:49] "None policy: Start" Dec 13 01:32:34.481225 kubelet[2197]: I1213 01:32:34.481198 2197 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 01:32:34.481291 kubelet[2197]: I1213 01:32:34.481244 2197 state_mem.go:35] "Initializing new in-memory state store" Dec 13 01:32:34.485722 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Dec 13 01:32:34.504029 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Dec 13 01:32:34.504640 kubelet[2197]: I1213 01:32:34.504617 2197 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 01:32:34.506545 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Dec 13 01:32:34.507150 kubelet[2197]: E1213 01:32:34.507127 2197 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.74:6443/api/v1/nodes\": dial tcp 10.0.0.74:6443: connect: connection refused" node="localhost" Dec 13 01:32:34.519293 kubelet[2197]: E1213 01:32:34.519262 2197 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 13 01:32:34.525676 kubelet[2197]: I1213 01:32:34.525554 2197 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 01:32:34.526283 kubelet[2197]: I1213 01:32:34.525806 2197 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 01:32:34.527601 kubelet[2197]: E1213 01:32:34.527570 2197 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Dec 13 01:32:34.604798 kubelet[2197]: E1213 01:32:34.604761 2197 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.74:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.74:6443: connect: connection refused" interval="400ms" Dec 13 01:32:34.708377 kubelet[2197]: I1213 01:32:34.708264 2197 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 01:32:34.708576 kubelet[2197]: E1213 01:32:34.708539 2197 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.74:6443/api/v1/nodes\": dial tcp 10.0.0.74:6443: connect: connection refused" node="localhost" Dec 13 01:32:34.720291 kubelet[2197]: I1213 01:32:34.720211 2197 topology_manager.go:215] "Topology Admit Handler" podUID="dcb90f33473ac0dcf0e48c76a29df5e2" podNamespace="kube-system" podName="kube-apiserver-localhost" Dec 13 01:32:34.721266 kubelet[2197]: I1213 01:32:34.721176 2197 topology_manager.go:215] "Topology Admit Handler" podUID="4f8e0d694c07e04969646aa3c152c34a" podNamespace="kube-system" podName="kube-controller-manager-localhost" Dec 13 01:32:34.722114 kubelet[2197]: I1213 01:32:34.722086 2197 topology_manager.go:215] "Topology Admit Handler" podUID="c4144e8f85b2123a6afada0c1705bbba" podNamespace="kube-system" podName="kube-scheduler-localhost" Dec 13 01:32:34.726966 systemd[1]: Created slice kubepods-burstable-poddcb90f33473ac0dcf0e48c76a29df5e2.slice - libcontainer container kubepods-burstable-poddcb90f33473ac0dcf0e48c76a29df5e2.slice. Dec 13 01:32:34.754239 systemd[1]: Created slice kubepods-burstable-pod4f8e0d694c07e04969646aa3c152c34a.slice - libcontainer container kubepods-burstable-pod4f8e0d694c07e04969646aa3c152c34a.slice. Dec 13 01:32:34.770021 systemd[1]: Created slice kubepods-burstable-podc4144e8f85b2123a6afada0c1705bbba.slice - libcontainer container kubepods-burstable-podc4144e8f85b2123a6afada0c1705bbba.slice. 
Dec 13 01:32:34.806092 kubelet[2197]: I1213 01:32:34.806053 2197 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/dcb90f33473ac0dcf0e48c76a29df5e2-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"dcb90f33473ac0dcf0e48c76a29df5e2\") " pod="kube-system/kube-apiserver-localhost" Dec 13 01:32:34.806092 kubelet[2197]: I1213 01:32:34.806095 2197 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:32:34.806196 kubelet[2197]: I1213 01:32:34.806118 2197 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:32:34.806196 kubelet[2197]: I1213 01:32:34.806141 2197 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:32:34.806196 kubelet[2197]: I1213 01:32:34.806161 2197 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/dcb90f33473ac0dcf0e48c76a29df5e2-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"dcb90f33473ac0dcf0e48c76a29df5e2\") " pod="kube-system/kube-apiserver-localhost" Dec 13 01:32:34.806196 kubelet[2197]: I1213 01:32:34.806183 2197 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/dcb90f33473ac0dcf0e48c76a29df5e2-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"dcb90f33473ac0dcf0e48c76a29df5e2\") " pod="kube-system/kube-apiserver-localhost" Dec 13 01:32:34.806196 kubelet[2197]: I1213 01:32:34.806200 2197 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:32:34.806303 kubelet[2197]: I1213 01:32:34.806220 2197 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:32:34.806303 kubelet[2197]: I1213 01:32:34.806246 2197 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c4144e8f85b2123a6afada0c1705bbba-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"c4144e8f85b2123a6afada0c1705bbba\") " 
pod="kube-system/kube-scheduler-localhost" Dec 13 01:32:35.006279 kubelet[2197]: E1213 01:32:35.006194 2197 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.74:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.74:6443: connect: connection refused" interval="800ms" Dec 13 01:32:35.054457 kubelet[2197]: E1213 01:32:35.054403 2197 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:32:35.055099 containerd[1444]: time="2024-12-13T01:32:35.055050285Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:dcb90f33473ac0dcf0e48c76a29df5e2,Namespace:kube-system,Attempt:0,}" Dec 13 01:32:35.068392 kubelet[2197]: E1213 01:32:35.068356 2197 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:32:35.068786 containerd[1444]: time="2024-12-13T01:32:35.068744360Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:4f8e0d694c07e04969646aa3c152c34a,Namespace:kube-system,Attempt:0,}" Dec 13 01:32:35.072030 kubelet[2197]: E1213 01:32:35.071996 2197 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:32:35.072440 containerd[1444]: time="2024-12-13T01:32:35.072393464Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:c4144e8f85b2123a6afada0c1705bbba,Namespace:kube-system,Attempt:0,}" Dec 13 01:32:35.109915 kubelet[2197]: I1213 01:32:35.109891 2197 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 01:32:35.110245 kubelet[2197]: E1213 01:32:35.110215 2197 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.74:6443/api/v1/nodes\": dial tcp 10.0.0.74:6443: connect: connection refused" node="localhost" Dec 13 01:32:35.462882 kubelet[2197]: W1213 01:32:35.462701 2197 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.74:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.74:6443: connect: connection refused Dec 13 01:32:35.462882 kubelet[2197]: E1213 01:32:35.462794 2197 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.74:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.74:6443: connect: connection refused Dec 13 01:32:35.551590 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3958231729.mount: Deactivated successfully. 
Dec 13 01:32:35.557209 containerd[1444]: time="2024-12-13T01:32:35.557167494Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:32:35.558235 containerd[1444]: time="2024-12-13T01:32:35.558199611Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:32:35.559122 containerd[1444]: time="2024-12-13T01:32:35.559091909Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:32:35.559912 containerd[1444]: time="2024-12-13T01:32:35.559882723Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Dec 13 01:32:35.560516 containerd[1444]: time="2024-12-13T01:32:35.560487019Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:32:35.561402 containerd[1444]: time="2024-12-13T01:32:35.561377876Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Dec 13 01:32:35.561940 containerd[1444]: time="2024-12-13T01:32:35.561922106Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Dec 13 01:32:35.563335 containerd[1444]: time="2024-12-13T01:32:35.563297928Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:32:35.564869 containerd[1444]: time="2024-12-13T01:32:35.564828936Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 509.696657ms" Dec 13 01:32:35.566365 containerd[1444]: time="2024-12-13T01:32:35.566334453Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 493.869719ms" Dec 13 01:32:35.570330 containerd[1444]: time="2024-12-13T01:32:35.570296330Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 501.454288ms" Dec 13 01:32:35.639223 kubelet[2197]: W1213 01:32:35.638993 2197 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.74:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.74:6443: connect: connection refused Dec 13 01:32:35.639223 
kubelet[2197]: E1213 01:32:35.639035 2197 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.74:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.74:6443: connect: connection refused Dec 13 01:32:35.697141 containerd[1444]: time="2024-12-13T01:32:35.697015236Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:32:35.697141 containerd[1444]: time="2024-12-13T01:32:35.697065777Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:32:35.697141 containerd[1444]: time="2024-12-13T01:32:35.697080784Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:32:35.697757 containerd[1444]: time="2024-12-13T01:32:35.697239971Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:32:35.699052 containerd[1444]: time="2024-12-13T01:32:35.698843730Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:32:35.699052 containerd[1444]: time="2024-12-13T01:32:35.698845130Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:32:35.699052 containerd[1444]: time="2024-12-13T01:32:35.698889709Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:32:35.699052 containerd[1444]: time="2024-12-13T01:32:35.698900354Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:32:35.699210 containerd[1444]: time="2024-12-13T01:32:35.699118366Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:32:35.699210 containerd[1444]: time="2024-12-13T01:32:35.699134213Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:32:35.699210 containerd[1444]: time="2024-12-13T01:32:35.698987070Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:32:35.699358 containerd[1444]: time="2024-12-13T01:32:35.699264228Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:32:35.720043 systemd[1]: Started cri-containerd-6f082905150b3715157efc539a932d8da27693728a95a4ea346e516ad0964ba3.scope - libcontainer container 6f082905150b3715157efc539a932d8da27693728a95a4ea346e516ad0964ba3. Dec 13 01:32:35.721178 systemd[1]: Started cri-containerd-7b4ce7dca77b26a0f982db1b2d23b257c570e0f6dcb98f44802bf936393e1470.scope - libcontainer container 7b4ce7dca77b26a0f982db1b2d23b257c570e0f6dcb98f44802bf936393e1470. Dec 13 01:32:35.723108 systemd[1]: Started cri-containerd-ffb2b1142dcfd0da3b8dc8c70e48765aa478e874d9277d7e16460338b2dc66c0.scope - libcontainer container ffb2b1142dcfd0da3b8dc8c70e48765aa478e874d9277d7e16460338b2dc66c0. 
Dec 13 01:32:35.749325 containerd[1444]: time="2024-12-13T01:32:35.748868499Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:4f8e0d694c07e04969646aa3c152c34a,Namespace:kube-system,Attempt:0,} returns sandbox id \"6f082905150b3715157efc539a932d8da27693728a95a4ea346e516ad0964ba3\"" Dec 13 01:32:35.750152 kubelet[2197]: E1213 01:32:35.750119 2197 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:32:35.752745 containerd[1444]: time="2024-12-13T01:32:35.752709405Z" level=info msg="CreateContainer within sandbox \"6f082905150b3715157efc539a932d8da27693728a95a4ea346e516ad0964ba3\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 13 01:32:35.755126 containerd[1444]: time="2024-12-13T01:32:35.755066322Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:dcb90f33473ac0dcf0e48c76a29df5e2,Namespace:kube-system,Attempt:0,} returns sandbox id \"7b4ce7dca77b26a0f982db1b2d23b257c570e0f6dcb98f44802bf936393e1470\"" Dec 13 01:32:35.755738 kubelet[2197]: E1213 01:32:35.755704 2197 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:32:35.757020 containerd[1444]: time="2024-12-13T01:32:35.756993858Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:c4144e8f85b2123a6afada0c1705bbba,Namespace:kube-system,Attempt:0,} returns sandbox id \"ffb2b1142dcfd0da3b8dc8c70e48765aa478e874d9277d7e16460338b2dc66c0\"" Dec 13 01:32:35.757713 kubelet[2197]: E1213 01:32:35.757675 2197 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:32:35.758380 containerd[1444]: time="2024-12-13T01:32:35.758160912Z" level=info msg="CreateContainer within sandbox \"7b4ce7dca77b26a0f982db1b2d23b257c570e0f6dcb98f44802bf936393e1470\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 13 01:32:35.759258 containerd[1444]: time="2024-12-13T01:32:35.759202593Z" level=info msg="CreateContainer within sandbox \"ffb2b1142dcfd0da3b8dc8c70e48765aa478e874d9277d7e16460338b2dc66c0\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 13 01:32:35.768467 containerd[1444]: time="2024-12-13T01:32:35.768435620Z" level=info msg="CreateContainer within sandbox \"6f082905150b3715157efc539a932d8da27693728a95a4ea346e516ad0964ba3\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"88db9adeeb213e8d14fb62301baa84a0af3b3e3820026320414c1b8a0c3ed9a6\"" Dec 13 01:32:35.769206 containerd[1444]: time="2024-12-13T01:32:35.769178854Z" level=info msg="StartContainer for \"88db9adeeb213e8d14fb62301baa84a0af3b3e3820026320414c1b8a0c3ed9a6\"" Dec 13 01:32:35.773085 containerd[1444]: time="2024-12-13T01:32:35.772989307Z" level=info msg="CreateContainer within sandbox \"ffb2b1142dcfd0da3b8dc8c70e48765aa478e874d9277d7e16460338b2dc66c0\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"4e934b307ed6bc3d999ca93acbbda8bb6253ea1093d087a3e9809847a7bfe802\"" Dec 13 01:32:35.773399 containerd[1444]: time="2024-12-13T01:32:35.773376711Z" level=info msg="StartContainer for \"4e934b307ed6bc3d999ca93acbbda8bb6253ea1093d087a3e9809847a7bfe802\"" Dec 13 
01:32:35.773621 containerd[1444]: time="2024-12-13T01:32:35.773593042Z" level=info msg="CreateContainer within sandbox \"7b4ce7dca77b26a0f982db1b2d23b257c570e0f6dcb98f44802bf936393e1470\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"e16fb0d385c10bcaa597aa38ffc98d66496b22362f4bcded2bab6a72083427f0\"" Dec 13 01:32:35.773925 containerd[1444]: time="2024-12-13T01:32:35.773897611Z" level=info msg="StartContainer for \"e16fb0d385c10bcaa597aa38ffc98d66496b22362f4bcded2bab6a72083427f0\"" Dec 13 01:32:35.795967 systemd[1]: Started cri-containerd-88db9adeeb213e8d14fb62301baa84a0af3b3e3820026320414c1b8a0c3ed9a6.scope - libcontainer container 88db9adeeb213e8d14fb62301baa84a0af3b3e3820026320414c1b8a0c3ed9a6. Dec 13 01:32:35.799832 systemd[1]: Started cri-containerd-4e934b307ed6bc3d999ca93acbbda8bb6253ea1093d087a3e9809847a7bfe802.scope - libcontainer container 4e934b307ed6bc3d999ca93acbbda8bb6253ea1093d087a3e9809847a7bfe802. Dec 13 01:32:35.801035 systemd[1]: Started cri-containerd-e16fb0d385c10bcaa597aa38ffc98d66496b22362f4bcded2bab6a72083427f0.scope - libcontainer container e16fb0d385c10bcaa597aa38ffc98d66496b22362f4bcded2bab6a72083427f0. Dec 13 01:32:35.807478 kubelet[2197]: E1213 01:32:35.807449 2197 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.74:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.74:6443: connect: connection refused" interval="1.6s" Dec 13 01:32:35.833412 containerd[1444]: time="2024-12-13T01:32:35.832013805Z" level=info msg="StartContainer for \"88db9adeeb213e8d14fb62301baa84a0af3b3e3820026320414c1b8a0c3ed9a6\" returns successfully" Dec 13 01:32:35.858707 containerd[1444]: time="2024-12-13T01:32:35.855271608Z" level=info msg="StartContainer for \"e16fb0d385c10bcaa597aa38ffc98d66496b22362f4bcded2bab6a72083427f0\" returns successfully" Dec 13 01:32:35.858707 containerd[1444]: time="2024-12-13T01:32:35.855271528Z" level=info msg="StartContainer for \"4e934b307ed6bc3d999ca93acbbda8bb6253ea1093d087a3e9809847a7bfe802\" returns successfully" Dec 13 01:32:35.885737 kubelet[2197]: W1213 01:32:35.885687 2197 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.74:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.74:6443: connect: connection refused Dec 13 01:32:35.885799 kubelet[2197]: E1213 01:32:35.885749 2197 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.74:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.74:6443: connect: connection refused Dec 13 01:32:35.916733 kubelet[2197]: I1213 01:32:35.916704 2197 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 01:32:35.917013 kubelet[2197]: E1213 01:32:35.916991 2197 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.74:6443/api/v1/nodes\": dial tcp 10.0.0.74:6443: connect: connection refused" node="localhost" Dec 13 01:32:35.959511 kubelet[2197]: W1213 01:32:35.959433 2197 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.74:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.74:6443: connect: connection refused Dec 13 01:32:35.959511 kubelet[2197]: E1213 01:32:35.959495 2197 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: 
Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.74:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.74:6443: connect: connection refused Dec 13 01:32:36.430442 kubelet[2197]: E1213 01:32:36.430169 2197 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:32:36.430442 kubelet[2197]: E1213 01:32:36.430248 2197 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:32:36.432464 kubelet[2197]: E1213 01:32:36.432391 2197 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:32:37.435917 kubelet[2197]: E1213 01:32:37.435867 2197 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:32:37.520058 kubelet[2197]: I1213 01:32:37.519684 2197 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 01:32:37.703060 kubelet[2197]: I1213 01:32:37.701196 2197 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Dec 13 01:32:38.398415 kubelet[2197]: I1213 01:32:38.398353 2197 apiserver.go:52] "Watching apiserver" Dec 13 01:32:38.404513 kubelet[2197]: I1213 01:32:38.404489 2197 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Dec 13 01:32:40.783008 systemd[1]: Reloading requested from client PID 2476 ('systemctl') (unit session-7.scope)... Dec 13 01:32:40.783023 systemd[1]: Reloading... Dec 13 01:32:40.842876 zram_generator::config[2518]: No configuration found. Dec 13 01:32:40.918424 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:32:40.981479 systemd[1]: Reloading finished in 198 ms. Dec 13 01:32:41.010011 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:32:41.019126 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 01:32:41.019299 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:32:41.019343 systemd[1]: kubelet.service: Consumed 1.601s CPU time, 116.8M memory peak, 0B memory swap peak. Dec 13 01:32:41.031107 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:32:41.117144 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:32:41.121125 (kubelet)[2557]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 13 01:32:41.164320 kubelet[2557]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 01:32:41.164320 kubelet[2557]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Dec 13 01:32:41.164320 kubelet[2557]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 01:32:41.164905 kubelet[2557]: I1213 01:32:41.164359 2557 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 01:32:41.169379 kubelet[2557]: I1213 01:32:41.169343 2557 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Dec 13 01:32:41.169564 kubelet[2557]: I1213 01:32:41.169460 2557 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 01:32:41.169838 kubelet[2557]: I1213 01:32:41.169798 2557 server.go:919] "Client rotation is on, will bootstrap in background" Dec 13 01:32:41.171401 kubelet[2557]: I1213 01:32:41.171377 2557 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Dec 13 01:32:41.174929 kubelet[2557]: I1213 01:32:41.174622 2557 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 01:32:41.188834 kubelet[2557]: I1213 01:32:41.186214 2557 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Dec 13 01:32:41.192864 kubelet[2557]: I1213 01:32:41.189259 2557 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 01:32:41.192864 kubelet[2557]: I1213 01:32:41.189425 2557 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 01:32:41.192864 kubelet[2557]: I1213 01:32:41.189448 2557 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 01:32:41.192864 kubelet[2557]: I1213 01:32:41.189456 2557 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 01:32:41.192864 kubelet[2557]: I1213 01:32:41.189488 2557 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:32:41.193178 kubelet[2557]: I1213 01:32:41.193158 2557 kubelet.go:396] "Attempting to sync node with 
API server" Dec 13 01:32:41.193688 kubelet[2557]: I1213 01:32:41.193658 2557 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 01:32:41.199851 kubelet[2557]: I1213 01:32:41.193804 2557 kubelet.go:312] "Adding apiserver pod source" Dec 13 01:32:41.200269 kubelet[2557]: I1213 01:32:41.199977 2557 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 01:32:41.201133 kubelet[2557]: I1213 01:32:41.201111 2557 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Dec 13 01:32:41.201399 kubelet[2557]: I1213 01:32:41.201384 2557 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 01:32:41.201875 kubelet[2557]: I1213 01:32:41.201856 2557 server.go:1256] "Started kubelet" Dec 13 01:32:41.203751 kubelet[2557]: I1213 01:32:41.203703 2557 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 01:32:41.205152 kubelet[2557]: I1213 01:32:41.205120 2557 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 01:32:41.205220 kubelet[2557]: I1213 01:32:41.205197 2557 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 01:32:41.208818 kubelet[2557]: I1213 01:32:41.206081 2557 server.go:461] "Adding debug handlers to kubelet server" Dec 13 01:32:41.208818 kubelet[2557]: I1213 01:32:41.207962 2557 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 01:32:41.208818 kubelet[2557]: I1213 01:32:41.208092 2557 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 01:32:41.209110 kubelet[2557]: I1213 01:32:41.209088 2557 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Dec 13 01:32:41.209242 kubelet[2557]: I1213 01:32:41.209226 2557 reconciler_new.go:29] "Reconciler: start to sync state" Dec 13 01:32:41.209798 kubelet[2557]: E1213 01:32:41.209780 2557 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 13 01:32:41.220764 kubelet[2557]: I1213 01:32:41.220053 2557 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 01:32:41.221418 kubelet[2557]: I1213 01:32:41.221105 2557 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 13 01:32:41.221418 kubelet[2557]: I1213 01:32:41.221136 2557 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 01:32:41.221418 kubelet[2557]: I1213 01:32:41.221151 2557 kubelet.go:2329] "Starting kubelet main sync loop" Dec 13 01:32:41.221418 kubelet[2557]: E1213 01:32:41.221201 2557 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 01:32:41.227717 kubelet[2557]: I1213 01:32:41.227689 2557 factory.go:221] Registration of the systemd container factory successfully Dec 13 01:32:41.228224 kubelet[2557]: I1213 01:32:41.228189 2557 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 01:32:41.231789 kubelet[2557]: I1213 01:32:41.231598 2557 factory.go:221] Registration of the containerd container factory successfully Dec 13 01:32:41.234634 kubelet[2557]: E1213 01:32:41.234593 2557 kubelet.go:1462] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 01:32:41.267272 kubelet[2557]: I1213 01:32:41.267243 2557 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 01:32:41.267272 kubelet[2557]: I1213 01:32:41.267269 2557 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 01:32:41.267400 kubelet[2557]: I1213 01:32:41.267288 2557 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:32:41.267447 kubelet[2557]: I1213 01:32:41.267431 2557 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 13 01:32:41.267473 kubelet[2557]: I1213 01:32:41.267457 2557 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 13 01:32:41.267473 kubelet[2557]: I1213 01:32:41.267464 2557 policy_none.go:49] "None policy: Start" Dec 13 01:32:41.268068 kubelet[2557]: I1213 01:32:41.268048 2557 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 01:32:41.268135 kubelet[2557]: I1213 01:32:41.268075 2557 state_mem.go:35] "Initializing new in-memory state store" Dec 13 01:32:41.268225 kubelet[2557]: I1213 01:32:41.268210 2557 state_mem.go:75] "Updated machine memory state" Dec 13 01:32:41.273443 kubelet[2557]: I1213 01:32:41.273286 2557 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 01:32:41.273535 kubelet[2557]: I1213 01:32:41.273513 2557 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 01:32:41.314854 kubelet[2557]: I1213 01:32:41.313646 2557 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 01:32:41.320628 kubelet[2557]: I1213 01:32:41.320600 2557 kubelet_node_status.go:112] "Node was previously registered" node="localhost" Dec 13 01:32:41.320748 kubelet[2557]: I1213 01:32:41.320681 2557 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Dec 13 01:32:41.321655 kubelet[2557]: I1213 01:32:41.321637 2557 topology_manager.go:215] "Topology Admit Handler" podUID="dcb90f33473ac0dcf0e48c76a29df5e2" podNamespace="kube-system" podName="kube-apiserver-localhost" Dec 13 01:32:41.322455 kubelet[2557]: I1213 01:32:41.321727 2557 topology_manager.go:215] "Topology Admit Handler" podUID="4f8e0d694c07e04969646aa3c152c34a" podNamespace="kube-system" podName="kube-controller-manager-localhost" Dec 13 01:32:41.322455 kubelet[2557]: I1213 01:32:41.321787 2557 topology_manager.go:215] "Topology Admit Handler" podUID="c4144e8f85b2123a6afada0c1705bbba" podNamespace="kube-system" podName="kube-scheduler-localhost" Dec 13 01:32:41.409950 kubelet[2557]: I1213 01:32:41.409849 2557 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:32:41.411249 kubelet[2557]: I1213 01:32:41.411211 2557 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:32:41.411323 kubelet[2557]: I1213 01:32:41.411272 2557 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:32:41.411323 kubelet[2557]: I1213 01:32:41.411301 2557 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c4144e8f85b2123a6afada0c1705bbba-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"c4144e8f85b2123a6afada0c1705bbba\") " pod="kube-system/kube-scheduler-localhost" Dec 13 01:32:41.411375 kubelet[2557]: I1213 01:32:41.411323 2557 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/dcb90f33473ac0dcf0e48c76a29df5e2-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"dcb90f33473ac0dcf0e48c76a29df5e2\") " pod="kube-system/kube-apiserver-localhost" Dec 13 01:32:41.411375 kubelet[2557]: I1213 01:32:41.411345 2557 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:32:41.411375 kubelet[2557]: I1213 01:32:41.411365 2557 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:32:41.411440 kubelet[2557]: I1213 01:32:41.411383 2557 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/dcb90f33473ac0dcf0e48c76a29df5e2-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"dcb90f33473ac0dcf0e48c76a29df5e2\") " pod="kube-system/kube-apiserver-localhost" Dec 13 01:32:41.411440 kubelet[2557]: I1213 01:32:41.411403 2557 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/dcb90f33473ac0dcf0e48c76a29df5e2-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"dcb90f33473ac0dcf0e48c76a29df5e2\") " pod="kube-system/kube-apiserver-localhost" Dec 13 01:32:41.634801 kubelet[2557]: E1213 01:32:41.634745 2557 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:32:41.635136 kubelet[2557]: E1213 01:32:41.635110 2557 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:32:41.635235 kubelet[2557]: E1213 01:32:41.635209 2557 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:32:42.200594 kubelet[2557]: I1213 01:32:42.200438 2557 apiserver.go:52] "Watching apiserver" Dec 13 01:32:42.210017 kubelet[2557]: I1213 01:32:42.209979 2557 desired_state_of_world_populator.go:159] "Finished 
populating initial desired state of world" Dec 13 01:32:42.251835 kubelet[2557]: E1213 01:32:42.250399 2557 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:32:42.260820 kubelet[2557]: E1213 01:32:42.260767 2557 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Dec 13 01:32:42.261089 kubelet[2557]: E1213 01:32:42.261065 2557 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:32:42.264528 kubelet[2557]: E1213 01:32:42.264485 2557 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Dec 13 01:32:42.266191 kubelet[2557]: E1213 01:32:42.266158 2557 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:32:42.284622 kubelet[2557]: I1213 01:32:42.284503 2557 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.284460892 podStartE2EDuration="1.284460892s" podCreationTimestamp="2024-12-13 01:32:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:32:42.283469573 +0000 UTC m=+1.158785803" watchObservedRunningTime="2024-12-13 01:32:42.284460892 +0000 UTC m=+1.159777122" Dec 13 01:32:42.296835 kubelet[2557]: I1213 01:32:42.296780 2557 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.296744158 podStartE2EDuration="1.296744158s" podCreationTimestamp="2024-12-13 01:32:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:32:42.296561321 +0000 UTC m=+1.171877511" watchObservedRunningTime="2024-12-13 01:32:42.296744158 +0000 UTC m=+1.172060388" Dec 13 01:32:42.303815 kubelet[2557]: I1213 01:32:42.303777 2557 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.303747803 podStartE2EDuration="1.303747803s" podCreationTimestamp="2024-12-13 01:32:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:32:42.303740642 +0000 UTC m=+1.179056872" watchObservedRunningTime="2024-12-13 01:32:42.303747803 +0000 UTC m=+1.179063993" Dec 13 01:32:43.251903 kubelet[2557]: E1213 01:32:43.251830 2557 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:32:43.252212 kubelet[2557]: E1213 01:32:43.252142 2557 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:32:44.253203 kubelet[2557]: E1213 01:32:44.253148 2557 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line 
is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:32:45.286935 sudo[1624]: pam_unix(sudo:session): session closed for user root Dec 13 01:32:45.290601 sshd[1621]: pam_unix(sshd:session): session closed for user core Dec 13 01:32:45.293935 systemd[1]: sshd@6-10.0.0.74:22-10.0.0.1:33374.service: Deactivated successfully. Dec 13 01:32:45.295460 systemd[1]: session-7.scope: Deactivated successfully. Dec 13 01:32:45.295633 systemd[1]: session-7.scope: Consumed 8.271s CPU time, 187.8M memory peak, 0B memory swap peak. Dec 13 01:32:45.296023 systemd-logind[1427]: Session 7 logged out. Waiting for processes to exit. Dec 13 01:32:45.296977 systemd-logind[1427]: Removed session 7. Dec 13 01:32:49.789323 kubelet[2557]: E1213 01:32:49.789292 2557 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:32:50.271928 kubelet[2557]: E1213 01:32:50.271776 2557 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:32:51.116405 update_engine[1431]: I20241213 01:32:51.116338 1431 update_attempter.cc:509] Updating boot flags... Dec 13 01:32:51.142849 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 46 scanned by (udev-worker) (2654) Dec 13 01:32:51.173556 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 46 scanned by (udev-worker) (2656) Dec 13 01:32:51.970528 kubelet[2557]: E1213 01:32:51.970483 2557 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:32:53.921380 kubelet[2557]: E1213 01:32:53.921319 2557 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:32:55.348797 kubelet[2557]: I1213 01:32:55.348749 2557 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 13 01:32:55.366326 containerd[1444]: time="2024-12-13T01:32:55.365486232Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Dec 13 01:32:55.367037 kubelet[2557]: I1213 01:32:55.366603 2557 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 13 01:32:55.629637 kubelet[2557]: I1213 01:32:55.628499 2557 topology_manager.go:215] "Topology Admit Handler" podUID="c1698733-7151-4a46-aee1-e041e39c5630" podNamespace="kube-system" podName="kube-proxy-mxvp9" Dec 13 01:32:55.639801 systemd[1]: Created slice kubepods-besteffort-podc1698733_7151_4a46_aee1_e041e39c5630.slice - libcontainer container kubepods-besteffort-podc1698733_7151_4a46_aee1_e041e39c5630.slice. 
Dec 13 01:32:55.711305 kubelet[2557]: I1213 01:32:55.711265 2557 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c1698733-7151-4a46-aee1-e041e39c5630-xtables-lock\") pod \"kube-proxy-mxvp9\" (UID: \"c1698733-7151-4a46-aee1-e041e39c5630\") " pod="kube-system/kube-proxy-mxvp9" Dec 13 01:32:55.711305 kubelet[2557]: I1213 01:32:55.711311 2557 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bmbgf\" (UniqueName: \"kubernetes.io/projected/c1698733-7151-4a46-aee1-e041e39c5630-kube-api-access-bmbgf\") pod \"kube-proxy-mxvp9\" (UID: \"c1698733-7151-4a46-aee1-e041e39c5630\") " pod="kube-system/kube-proxy-mxvp9" Dec 13 01:32:55.711441 kubelet[2557]: I1213 01:32:55.711335 2557 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c1698733-7151-4a46-aee1-e041e39c5630-kube-proxy\") pod \"kube-proxy-mxvp9\" (UID: \"c1698733-7151-4a46-aee1-e041e39c5630\") " pod="kube-system/kube-proxy-mxvp9" Dec 13 01:32:55.711441 kubelet[2557]: I1213 01:32:55.711354 2557 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c1698733-7151-4a46-aee1-e041e39c5630-lib-modules\") pod \"kube-proxy-mxvp9\" (UID: \"c1698733-7151-4a46-aee1-e041e39c5630\") " pod="kube-system/kube-proxy-mxvp9" Dec 13 01:32:55.824053 kubelet[2557]: E1213 01:32:55.824011 2557 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Dec 13 01:32:55.824053 kubelet[2557]: E1213 01:32:55.824055 2557 projected.go:200] Error preparing data for projected volume kube-api-access-bmbgf for pod kube-system/kube-proxy-mxvp9: configmap "kube-root-ca.crt" not found Dec 13 01:32:55.824240 kubelet[2557]: E1213 01:32:55.824119 2557 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c1698733-7151-4a46-aee1-e041e39c5630-kube-api-access-bmbgf podName:c1698733-7151-4a46-aee1-e041e39c5630 nodeName:}" failed. No retries permitted until 2024-12-13 01:32:56.324100497 +0000 UTC m=+15.199416727 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-bmbgf" (UniqueName: "kubernetes.io/projected/c1698733-7151-4a46-aee1-e041e39c5630-kube-api-access-bmbgf") pod "kube-proxy-mxvp9" (UID: "c1698733-7151-4a46-aee1-e041e39c5630") : configmap "kube-root-ca.crt" not found Dec 13 01:32:56.141799 kubelet[2557]: I1213 01:32:56.141235 2557 topology_manager.go:215] "Topology Admit Handler" podUID="32ce013f-c1a3-4947-a8bc-f5b30c4b13d0" podNamespace="tigera-operator" podName="tigera-operator-c7ccbd65-54qnj" Dec 13 01:32:56.150599 systemd[1]: Created slice kubepods-besteffort-pod32ce013f_c1a3_4947_a8bc_f5b30c4b13d0.slice - libcontainer container kubepods-besteffort-pod32ce013f_c1a3_4947_a8bc_f5b30c4b13d0.slice. 
Dec 13 01:32:56.215238 kubelet[2557]: I1213 01:32:56.215153 2557 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/32ce013f-c1a3-4947-a8bc-f5b30c4b13d0-var-lib-calico\") pod \"tigera-operator-c7ccbd65-54qnj\" (UID: \"32ce013f-c1a3-4947-a8bc-f5b30c4b13d0\") " pod="tigera-operator/tigera-operator-c7ccbd65-54qnj" Dec 13 01:32:56.215238 kubelet[2557]: I1213 01:32:56.215196 2557 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nn28k\" (UniqueName: \"kubernetes.io/projected/32ce013f-c1a3-4947-a8bc-f5b30c4b13d0-kube-api-access-nn28k\") pod \"tigera-operator-c7ccbd65-54qnj\" (UID: \"32ce013f-c1a3-4947-a8bc-f5b30c4b13d0\") " pod="tigera-operator/tigera-operator-c7ccbd65-54qnj" Dec 13 01:32:56.456276 containerd[1444]: time="2024-12-13T01:32:56.456229202Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-c7ccbd65-54qnj,Uid:32ce013f-c1a3-4947-a8bc-f5b30c4b13d0,Namespace:tigera-operator,Attempt:0,}" Dec 13 01:32:56.476191 containerd[1444]: time="2024-12-13T01:32:56.475898049Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:32:56.476191 containerd[1444]: time="2024-12-13T01:32:56.475961375Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:32:56.476191 containerd[1444]: time="2024-12-13T01:32:56.475976057Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:32:56.476191 containerd[1444]: time="2024-12-13T01:32:56.476054945Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:32:56.498081 systemd[1]: Started cri-containerd-f19a8316744b7b524478f901eecf1652a04e20e7e8b2bdc939b2dcae12760ea8.scope - libcontainer container f19a8316744b7b524478f901eecf1652a04e20e7e8b2bdc939b2dcae12760ea8. Dec 13 01:32:56.526152 containerd[1444]: time="2024-12-13T01:32:56.526114408Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-c7ccbd65-54qnj,Uid:32ce013f-c1a3-4947-a8bc-f5b30c4b13d0,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"f19a8316744b7b524478f901eecf1652a04e20e7e8b2bdc939b2dcae12760ea8\"" Dec 13 01:32:56.536071 containerd[1444]: time="2024-12-13T01:32:56.535962173Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\"" Dec 13 01:32:56.559149 kubelet[2557]: E1213 01:32:56.559123 2557 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:32:56.559925 containerd[1444]: time="2024-12-13T01:32:56.559591928Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-mxvp9,Uid:c1698733-7151-4a46-aee1-e041e39c5630,Namespace:kube-system,Attempt:0,}" Dec 13 01:32:56.576762 containerd[1444]: time="2024-12-13T01:32:56.576631357Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:32:56.577196 containerd[1444]: time="2024-12-13T01:32:56.576765850Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:32:56.577259 containerd[1444]: time="2024-12-13T01:32:56.577203653Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:32:56.577334 containerd[1444]: time="2024-12-13T01:32:56.577302383Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:32:56.594968 systemd[1]: Started cri-containerd-437345213cfa7fecc8a30ce72767e29d79912cc0468f1aa08dd225b310fbb889.scope - libcontainer container 437345213cfa7fecc8a30ce72767e29d79912cc0468f1aa08dd225b310fbb889. Dec 13 01:32:56.613419 containerd[1444]: time="2024-12-13T01:32:56.613385437Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-mxvp9,Uid:c1698733-7151-4a46-aee1-e041e39c5630,Namespace:kube-system,Attempt:0,} returns sandbox id \"437345213cfa7fecc8a30ce72767e29d79912cc0468f1aa08dd225b310fbb889\"" Dec 13 01:32:56.620730 kubelet[2557]: E1213 01:32:56.620583 2557 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:32:56.626136 containerd[1444]: time="2024-12-13T01:32:56.626104043Z" level=info msg="CreateContainer within sandbox \"437345213cfa7fecc8a30ce72767e29d79912cc0468f1aa08dd225b310fbb889\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 13 01:32:56.636515 containerd[1444]: time="2024-12-13T01:32:56.636414733Z" level=info msg="CreateContainer within sandbox \"437345213cfa7fecc8a30ce72767e29d79912cc0468f1aa08dd225b310fbb889\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"24086a15d34c26edf8d6151688975d3d76fbcbb3e9a1c64b8d246e61e3b5072b\"" Dec 13 01:32:56.638007 containerd[1444]: time="2024-12-13T01:32:56.637970086Z" level=info msg="StartContainer for \"24086a15d34c26edf8d6151688975d3d76fbcbb3e9a1c64b8d246e61e3b5072b\"" Dec 13 01:32:56.659022 systemd[1]: Started cri-containerd-24086a15d34c26edf8d6151688975d3d76fbcbb3e9a1c64b8d246e61e3b5072b.scope - libcontainer container 24086a15d34c26edf8d6151688975d3d76fbcbb3e9a1c64b8d246e61e3b5072b. Dec 13 01:32:56.684515 containerd[1444]: time="2024-12-13T01:32:56.683915266Z" level=info msg="StartContainer for \"24086a15d34c26edf8d6151688975d3d76fbcbb3e9a1c64b8d246e61e3b5072b\" returns successfully" Dec 13 01:32:57.287303 kubelet[2557]: E1213 01:32:57.287266 2557 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:32:57.305612 kubelet[2557]: I1213 01:32:57.305557 2557 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-mxvp9" podStartSLOduration=2.304355197 podStartE2EDuration="2.304355197s" podCreationTimestamp="2024-12-13 01:32:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:32:57.304141497 +0000 UTC m=+16.179457727" watchObservedRunningTime="2024-12-13 01:32:57.304355197 +0000 UTC m=+16.179671427" Dec 13 01:32:57.331424 systemd[1]: run-containerd-runc-k8s.io-f19a8316744b7b524478f901eecf1652a04e20e7e8b2bdc939b2dcae12760ea8-runc.aEDZgy.mount: Deactivated successfully. Dec 13 01:32:58.730255 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1438783702.mount: Deactivated successfully. 
Dec 13 01:33:01.430893 containerd[1444]: time="2024-12-13T01:33:01.430799483Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:33:01.431941 containerd[1444]: time="2024-12-13T01:33:01.431901409Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=19126024" Dec 13 01:33:01.433054 containerd[1444]: time="2024-12-13T01:33:01.433005736Z" level=info msg="ImageCreate event name:\"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:33:01.435174 containerd[1444]: time="2024-12-13T01:33:01.435137464Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:33:01.435834 containerd[1444]: time="2024-12-13T01:33:01.435780755Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"19120155\" in 4.899777498s" Dec 13 01:33:01.435834 containerd[1444]: time="2024-12-13T01:33:01.435828278Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\"" Dec 13 01:33:01.440087 containerd[1444]: time="2024-12-13T01:33:01.440056571Z" level=info msg="CreateContainer within sandbox \"f19a8316744b7b524478f901eecf1652a04e20e7e8b2bdc939b2dcae12760ea8\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Dec 13 01:33:01.460158 containerd[1444]: time="2024-12-13T01:33:01.460104788Z" level=info msg="CreateContainer within sandbox \"f19a8316744b7b524478f901eecf1652a04e20e7e8b2bdc939b2dcae12760ea8\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"a59b14d7cfbef110edbe359fd73ca9a45caefaa1e9d990008907fd52d5dda390\"" Dec 13 01:33:01.461863 containerd[1444]: time="2024-12-13T01:33:01.460646990Z" level=info msg="StartContainer for \"a59b14d7cfbef110edbe359fd73ca9a45caefaa1e9d990008907fd52d5dda390\"" Dec 13 01:33:01.491246 systemd[1]: Started cri-containerd-a59b14d7cfbef110edbe359fd73ca9a45caefaa1e9d990008907fd52d5dda390.scope - libcontainer container a59b14d7cfbef110edbe359fd73ca9a45caefaa1e9d990008907fd52d5dda390. 
Dec 13 01:33:01.554149 containerd[1444]: time="2024-12-13T01:33:01.554090020Z" level=info msg="StartContainer for \"a59b14d7cfbef110edbe359fd73ca9a45caefaa1e9d990008907fd52d5dda390\" returns successfully" Dec 13 01:33:05.779496 kubelet[2557]: I1213 01:33:05.779423 2557 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-c7ccbd65-54qnj" podStartSLOduration=4.877100325 podStartE2EDuration="9.77937288s" podCreationTimestamp="2024-12-13 01:32:56 +0000 UTC" firstStartedPulling="2024-12-13 01:32:56.534935632 +0000 UTC m=+15.410251862" lastFinishedPulling="2024-12-13 01:33:01.437208187 +0000 UTC m=+20.312524417" observedRunningTime="2024-12-13 01:33:02.303741424 +0000 UTC m=+21.179057654" watchObservedRunningTime="2024-12-13 01:33:05.77937288 +0000 UTC m=+24.654689110" Dec 13 01:33:05.780368 kubelet[2557]: I1213 01:33:05.780196 2557 topology_manager.go:215] "Topology Admit Handler" podUID="9d977fe5-03b4-463b-8f02-460bb4af0164" podNamespace="calico-system" podName="calico-typha-5db7c6847c-5r94r" Dec 13 01:33:05.792440 systemd[1]: Created slice kubepods-besteffort-pod9d977fe5_03b4_463b_8f02_460bb4af0164.slice - libcontainer container kubepods-besteffort-pod9d977fe5_03b4_463b_8f02_460bb4af0164.slice. Dec 13 01:33:05.822470 kubelet[2557]: I1213 01:33:05.822429 2557 topology_manager.go:215] "Topology Admit Handler" podUID="033b40f4-1fee-4973-a5ad-a46e74606e69" podNamespace="calico-system" podName="calico-node-ghwq9" Dec 13 01:33:05.830990 systemd[1]: Created slice kubepods-besteffort-pod033b40f4_1fee_4973_a5ad_a46e74606e69.slice - libcontainer container kubepods-besteffort-pod033b40f4_1fee_4973_a5ad_a46e74606e69.slice. Dec 13 01:33:05.881596 kubelet[2557]: I1213 01:33:05.881563 2557 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9d977fe5-03b4-463b-8f02-460bb4af0164-tigera-ca-bundle\") pod \"calico-typha-5db7c6847c-5r94r\" (UID: \"9d977fe5-03b4-463b-8f02-460bb4af0164\") " pod="calico-system/calico-typha-5db7c6847c-5r94r" Dec 13 01:33:05.886311 kubelet[2557]: I1213 01:33:05.886261 2557 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bnwts\" (UniqueName: \"kubernetes.io/projected/9d977fe5-03b4-463b-8f02-460bb4af0164-kube-api-access-bnwts\") pod \"calico-typha-5db7c6847c-5r94r\" (UID: \"9d977fe5-03b4-463b-8f02-460bb4af0164\") " pod="calico-system/calico-typha-5db7c6847c-5r94r" Dec 13 01:33:05.886701 kubelet[2557]: I1213 01:33:05.886684 2557 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/9d977fe5-03b4-463b-8f02-460bb4af0164-typha-certs\") pod \"calico-typha-5db7c6847c-5r94r\" (UID: \"9d977fe5-03b4-463b-8f02-460bb4af0164\") " pod="calico-system/calico-typha-5db7c6847c-5r94r" Dec 13 01:33:05.929262 kubelet[2557]: I1213 01:33:05.929229 2557 topology_manager.go:215] "Topology Admit Handler" podUID="616ab489-e9c3-404d-8315-b87df53098e2" podNamespace="calico-system" podName="csi-node-driver-f78qq" Dec 13 01:33:05.930371 kubelet[2557]: E1213 01:33:05.930346 2557 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-f78qq" podUID="616ab489-e9c3-404d-8315-b87df53098e2" Dec 13 
01:33:05.987486 kubelet[2557]: I1213 01:33:05.987451 2557 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4rg5q\" (UniqueName: \"kubernetes.io/projected/616ab489-e9c3-404d-8315-b87df53098e2-kube-api-access-4rg5q\") pod \"csi-node-driver-f78qq\" (UID: \"616ab489-e9c3-404d-8315-b87df53098e2\") " pod="calico-system/csi-node-driver-f78qq" Dec 13 01:33:05.987486 kubelet[2557]: I1213 01:33:05.987501 2557 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/033b40f4-1fee-4973-a5ad-a46e74606e69-var-lib-calico\") pod \"calico-node-ghwq9\" (UID: \"033b40f4-1fee-4973-a5ad-a46e74606e69\") " pod="calico-system/calico-node-ghwq9" Dec 13 01:33:05.987486 kubelet[2557]: I1213 01:33:05.987565 2557 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/033b40f4-1fee-4973-a5ad-a46e74606e69-cni-log-dir\") pod \"calico-node-ghwq9\" (UID: \"033b40f4-1fee-4973-a5ad-a46e74606e69\") " pod="calico-system/calico-node-ghwq9" Dec 13 01:33:05.987486 kubelet[2557]: I1213 01:33:05.987622 2557 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/616ab489-e9c3-404d-8315-b87df53098e2-registration-dir\") pod \"csi-node-driver-f78qq\" (UID: \"616ab489-e9c3-404d-8315-b87df53098e2\") " pod="calico-system/csi-node-driver-f78qq" Dec 13 01:33:05.987486 kubelet[2557]: I1213 01:33:05.987681 2557 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/033b40f4-1fee-4973-a5ad-a46e74606e69-node-certs\") pod \"calico-node-ghwq9\" (UID: \"033b40f4-1fee-4973-a5ad-a46e74606e69\") " pod="calico-system/calico-node-ghwq9" Dec 13 01:33:05.988064 kubelet[2557]: I1213 01:33:05.987701 2557 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/616ab489-e9c3-404d-8315-b87df53098e2-varrun\") pod \"csi-node-driver-f78qq\" (UID: \"616ab489-e9c3-404d-8315-b87df53098e2\") " pod="calico-system/csi-node-driver-f78qq" Dec 13 01:33:05.988064 kubelet[2557]: I1213 01:33:05.987728 2557 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/033b40f4-1fee-4973-a5ad-a46e74606e69-lib-modules\") pod \"calico-node-ghwq9\" (UID: \"033b40f4-1fee-4973-a5ad-a46e74606e69\") " pod="calico-system/calico-node-ghwq9" Dec 13 01:33:05.988064 kubelet[2557]: I1213 01:33:05.987747 2557 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dpm7v\" (UniqueName: \"kubernetes.io/projected/033b40f4-1fee-4973-a5ad-a46e74606e69-kube-api-access-dpm7v\") pod \"calico-node-ghwq9\" (UID: \"033b40f4-1fee-4973-a5ad-a46e74606e69\") " pod="calico-system/calico-node-ghwq9" Dec 13 01:33:05.988064 kubelet[2557]: I1213 01:33:05.987765 2557 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/033b40f4-1fee-4973-a5ad-a46e74606e69-cni-bin-dir\") pod \"calico-node-ghwq9\" (UID: \"033b40f4-1fee-4973-a5ad-a46e74606e69\") " pod="calico-system/calico-node-ghwq9" Dec 13 01:33:05.988064 kubelet[2557]: I1213 01:33:05.987787 2557 
reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/033b40f4-1fee-4973-a5ad-a46e74606e69-cni-net-dir\") pod \"calico-node-ghwq9\" (UID: \"033b40f4-1fee-4973-a5ad-a46e74606e69\") " pod="calico-system/calico-node-ghwq9" Dec 13 01:33:05.988170 kubelet[2557]: I1213 01:33:05.987805 2557 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/616ab489-e9c3-404d-8315-b87df53098e2-socket-dir\") pod \"csi-node-driver-f78qq\" (UID: \"616ab489-e9c3-404d-8315-b87df53098e2\") " pod="calico-system/csi-node-driver-f78qq" Dec 13 01:33:05.990304 kubelet[2557]: I1213 01:33:05.990059 2557 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/033b40f4-1fee-4973-a5ad-a46e74606e69-xtables-lock\") pod \"calico-node-ghwq9\" (UID: \"033b40f4-1fee-4973-a5ad-a46e74606e69\") " pod="calico-system/calico-node-ghwq9" Dec 13 01:33:05.990304 kubelet[2557]: I1213 01:33:05.990106 2557 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/033b40f4-1fee-4973-a5ad-a46e74606e69-var-run-calico\") pod \"calico-node-ghwq9\" (UID: \"033b40f4-1fee-4973-a5ad-a46e74606e69\") " pod="calico-system/calico-node-ghwq9" Dec 13 01:33:05.990304 kubelet[2557]: I1213 01:33:05.990125 2557 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/616ab489-e9c3-404d-8315-b87df53098e2-kubelet-dir\") pod \"csi-node-driver-f78qq\" (UID: \"616ab489-e9c3-404d-8315-b87df53098e2\") " pod="calico-system/csi-node-driver-f78qq" Dec 13 01:33:05.990304 kubelet[2557]: I1213 01:33:05.990146 2557 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/033b40f4-1fee-4973-a5ad-a46e74606e69-policysync\") pod \"calico-node-ghwq9\" (UID: \"033b40f4-1fee-4973-a5ad-a46e74606e69\") " pod="calico-system/calico-node-ghwq9" Dec 13 01:33:05.990304 kubelet[2557]: I1213 01:33:05.990167 2557 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/033b40f4-1fee-4973-a5ad-a46e74606e69-tigera-ca-bundle\") pod \"calico-node-ghwq9\" (UID: \"033b40f4-1fee-4973-a5ad-a46e74606e69\") " pod="calico-system/calico-node-ghwq9" Dec 13 01:33:05.990482 kubelet[2557]: I1213 01:33:05.990189 2557 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/033b40f4-1fee-4973-a5ad-a46e74606e69-flexvol-driver-host\") pod \"calico-node-ghwq9\" (UID: \"033b40f4-1fee-4973-a5ad-a46e74606e69\") " pod="calico-system/calico-node-ghwq9" Dec 13 01:33:06.100553 kubelet[2557]: E1213 01:33:06.100198 2557 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:33:06.100553 kubelet[2557]: W1213 01:33:06.100217 2557 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:33:06.100553 kubelet[2557]: E1213 01:33:06.100245 2557 
plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:33:06.100553 kubelet[2557]: E1213 01:33:06.100417 2557 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:33:06.100553 kubelet[2557]: W1213 01:33:06.100426 2557 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:33:06.100553 kubelet[2557]: E1213 01:33:06.100437 2557 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:33:06.102597 kubelet[2557]: E1213 01:33:06.101404 2557 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:33:06.102878 kubelet[2557]: E1213 01:33:06.102860 2557 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:33:06.102966 kubelet[2557]: W1213 01:33:06.102951 2557 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:33:06.103022 kubelet[2557]: E1213 01:33:06.103012 2557 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:33:06.103356 containerd[1444]: time="2024-12-13T01:33:06.103322413Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5db7c6847c-5r94r,Uid:9d977fe5-03b4-463b-8f02-460bb4af0164,Namespace:calico-system,Attempt:0,}" Dec 13 01:33:06.104718 kubelet[2557]: E1213 01:33:06.104700 2557 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:33:06.104839 kubelet[2557]: W1213 01:33:06.104824 2557 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:33:06.104986 kubelet[2557]: E1213 01:33:06.104956 2557 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:33:06.122466 containerd[1444]: time="2024-12-13T01:33:06.122333282Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:33:06.122466 containerd[1444]: time="2024-12-13T01:33:06.122425808Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:33:06.122608 containerd[1444]: time="2024-12-13T01:33:06.122438489Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:33:06.123759 containerd[1444]: time="2024-12-13T01:33:06.123675089Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:33:06.133715 kubelet[2557]: E1213 01:33:06.133485 2557 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:33:06.135218 containerd[1444]: time="2024-12-13T01:33:06.134235372Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-ghwq9,Uid:033b40f4-1fee-4973-a5ad-a46e74606e69,Namespace:calico-system,Attempt:0,}" Dec 13 01:33:06.147069 systemd[1]: Started cri-containerd-048fc5aeeb3ef78ae3ac14129c8cacb2a66e88d6a78ff2fa9b2bf1eed95170f5.scope - libcontainer container 048fc5aeeb3ef78ae3ac14129c8cacb2a66e88d6a78ff2fa9b2bf1eed95170f5. Dec 13 01:33:06.176695 containerd[1444]: time="2024-12-13T01:33:06.176168964Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5db7c6847c-5r94r,Uid:9d977fe5-03b4-463b-8f02-460bb4af0164,Namespace:calico-system,Attempt:0,} returns sandbox id \"048fc5aeeb3ef78ae3ac14129c8cacb2a66e88d6a78ff2fa9b2bf1eed95170f5\"" Dec 13 01:33:06.177575 kubelet[2557]: E1213 01:33:06.177185 2557 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:33:06.179954 containerd[1444]: time="2024-12-13T01:33:06.179757796Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\"" Dec 13 01:33:06.189226 containerd[1444]: time="2024-12-13T01:33:06.189145363Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:33:06.189322 containerd[1444]: time="2024-12-13T01:33:06.189212568Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:33:06.189322 containerd[1444]: time="2024-12-13T01:33:06.189228009Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:33:06.189322 containerd[1444]: time="2024-12-13T01:33:06.189303014Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:33:06.208029 systemd[1]: Started cri-containerd-943e18c2d5d32155da1df32349f76045948925a5f86e088a825bcb20a3d04f22.scope - libcontainer container 943e18c2d5d32155da1df32349f76045948925a5f86e088a825bcb20a3d04f22. 
Dec 13 01:33:06.231160 containerd[1444]: time="2024-12-13T01:33:06.231126839Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-ghwq9,Uid:033b40f4-1fee-4973-a5ad-a46e74606e69,Namespace:calico-system,Attempt:0,} returns sandbox id \"943e18c2d5d32155da1df32349f76045948925a5f86e088a825bcb20a3d04f22\"" Dec 13 01:33:06.231854 kubelet[2557]: E1213 01:33:06.231745 2557 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:33:07.222576 kubelet[2557]: E1213 01:33:07.222526 2557 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-f78qq" podUID="616ab489-e9c3-404d-8315-b87df53098e2" Dec 13 01:33:07.254179 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3019480084.mount: Deactivated successfully. Dec 13 01:33:07.576298 containerd[1444]: time="2024-12-13T01:33:07.576197069Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:33:07.577329 containerd[1444]: time="2024-12-13T01:33:07.577231214Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=29231308" Dec 13 01:33:07.578254 containerd[1444]: time="2024-12-13T01:33:07.578198314Z" level=info msg="ImageCreate event name:\"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:33:07.580844 containerd[1444]: time="2024-12-13T01:33:07.580630506Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:33:07.582062 containerd[1444]: time="2024-12-13T01:33:07.582029913Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"29231162\" in 1.402240754s" Dec 13 01:33:07.582103 containerd[1444]: time="2024-12-13T01:33:07.582059875Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\"" Dec 13 01:33:07.583031 containerd[1444]: time="2024-12-13T01:33:07.582998133Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Dec 13 01:33:07.588676 containerd[1444]: time="2024-12-13T01:33:07.588642685Z" level=info msg="CreateContainer within sandbox \"048fc5aeeb3ef78ae3ac14129c8cacb2a66e88d6a78ff2fa9b2bf1eed95170f5\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Dec 13 01:33:07.599259 containerd[1444]: time="2024-12-13T01:33:07.599213465Z" level=info msg="CreateContainer within sandbox \"048fc5aeeb3ef78ae3ac14129c8cacb2a66e88d6a78ff2fa9b2bf1eed95170f5\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"69e9e9b3a4986d298de7326849a09e3b154a6714a8af0a9193646911655b0aa3\"" Dec 13 01:33:07.600992 containerd[1444]: time="2024-12-13T01:33:07.600952813Z" level=info 
msg="StartContainer for \"69e9e9b3a4986d298de7326849a09e3b154a6714a8af0a9193646911655b0aa3\"" Dec 13 01:33:07.632015 systemd[1]: Started cri-containerd-69e9e9b3a4986d298de7326849a09e3b154a6714a8af0a9193646911655b0aa3.scope - libcontainer container 69e9e9b3a4986d298de7326849a09e3b154a6714a8af0a9193646911655b0aa3. Dec 13 01:33:07.663686 containerd[1444]: time="2024-12-13T01:33:07.663647604Z" level=info msg="StartContainer for \"69e9e9b3a4986d298de7326849a09e3b154a6714a8af0a9193646911655b0aa3\" returns successfully" Dec 13 01:33:08.310498 kubelet[2557]: E1213 01:33:08.309798 2557 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:33:08.319459 kubelet[2557]: I1213 01:33:08.319423 2557 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-5db7c6847c-5r94r" podStartSLOduration=1.916512348 podStartE2EDuration="3.319385983s" podCreationTimestamp="2024-12-13 01:33:05 +0000 UTC" firstStartedPulling="2024-12-13 01:33:06.179429215 +0000 UTC m=+25.054745445" lastFinishedPulling="2024-12-13 01:33:07.58230285 +0000 UTC m=+26.457619080" observedRunningTime="2024-12-13 01:33:08.318645979 +0000 UTC m=+27.193962209" watchObservedRunningTime="2024-12-13 01:33:08.319385983 +0000 UTC m=+27.194702173" Dec 13 01:33:08.404698 kubelet[2557]: E1213 01:33:08.404666 2557 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:33:08.404698 kubelet[2557]: W1213 01:33:08.404685 2557 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:33:08.404698 kubelet[2557]: E1213 01:33:08.404702 2557 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:33:08.405038 kubelet[2557]: E1213 01:33:08.405007 2557 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:33:08.405038 kubelet[2557]: W1213 01:33:08.405018 2557 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:33:08.405038 kubelet[2557]: E1213 01:33:08.405029 2557 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:33:08.405216 kubelet[2557]: E1213 01:33:08.405195 2557 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:33:08.405216 kubelet[2557]: W1213 01:33:08.405206 2557 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:33:08.405216 kubelet[2557]: E1213 01:33:08.405217 2557 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:33:08.405396 kubelet[2557]: E1213 01:33:08.405376 2557 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:33:08.405396 kubelet[2557]: W1213 01:33:08.405387 2557 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:33:08.405396 kubelet[2557]: E1213 01:33:08.405397 2557 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:33:08.405571 kubelet[2557]: E1213 01:33:08.405553 2557 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:33:08.405571 kubelet[2557]: W1213 01:33:08.405564 2557 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:33:08.405621 kubelet[2557]: E1213 01:33:08.405575 2557 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:33:08.405752 kubelet[2557]: E1213 01:33:08.405731 2557 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:33:08.405752 kubelet[2557]: W1213 01:33:08.405741 2557 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:33:08.405799 kubelet[2557]: E1213 01:33:08.405757 2557 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:33:08.405939 kubelet[2557]: E1213 01:33:08.405914 2557 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:33:08.405939 kubelet[2557]: W1213 01:33:08.405929 2557 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:33:08.405939 kubelet[2557]: E1213 01:33:08.405939 2557 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:33:08.406209 kubelet[2557]: E1213 01:33:08.406184 2557 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:33:08.406209 kubelet[2557]: W1213 01:33:08.406194 2557 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:33:08.406209 kubelet[2557]: E1213 01:33:08.406204 2557 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:33:08.406372 kubelet[2557]: E1213 01:33:08.406352 2557 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:33:08.406372 kubelet[2557]: W1213 01:33:08.406370 2557 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:33:08.406419 kubelet[2557]: E1213 01:33:08.406380 2557 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:33:08.406508 kubelet[2557]: E1213 01:33:08.406498 2557 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:33:08.406508 kubelet[2557]: W1213 01:33:08.406507 2557 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:33:08.406552 kubelet[2557]: E1213 01:33:08.406520 2557 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:33:08.406695 kubelet[2557]: E1213 01:33:08.406646 2557 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:33:08.406695 kubelet[2557]: W1213 01:33:08.406660 2557 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:33:08.406695 kubelet[2557]: E1213 01:33:08.406669 2557 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:33:08.406813 kubelet[2557]: E1213 01:33:08.406793 2557 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:33:08.406845 kubelet[2557]: W1213 01:33:08.406824 2557 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:33:08.406845 kubelet[2557]: E1213 01:33:08.406835 2557 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:33:08.407037 kubelet[2557]: E1213 01:33:08.407021 2557 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:33:08.407037 kubelet[2557]: W1213 01:33:08.407032 2557 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:33:08.407100 kubelet[2557]: E1213 01:33:08.407042 2557 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:33:08.407194 kubelet[2557]: E1213 01:33:08.407182 2557 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:33:08.407194 kubelet[2557]: W1213 01:33:08.407190 2557 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:33:08.407237 kubelet[2557]: E1213 01:33:08.407200 2557 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:33:08.407338 kubelet[2557]: E1213 01:33:08.407328 2557 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:33:08.407361 kubelet[2557]: W1213 01:33:08.407343 2557 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:33:08.407361 kubelet[2557]: E1213 01:33:08.407353 2557 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:33:08.407544 kubelet[2557]: E1213 01:33:08.407533 2557 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:33:08.407565 kubelet[2557]: W1213 01:33:08.407544 2557 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:33:08.407588 kubelet[2557]: E1213 01:33:08.407575 2557 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:33:08.407784 kubelet[2557]: E1213 01:33:08.407724 2557 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:33:08.407784 kubelet[2557]: W1213 01:33:08.407733 2557 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:33:08.407784 kubelet[2557]: E1213 01:33:08.407760 2557 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:33:08.407981 kubelet[2557]: E1213 01:33:08.407969 2557 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:33:08.407981 kubelet[2557]: W1213 01:33:08.407979 2557 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:33:08.408029 kubelet[2557]: E1213 01:33:08.407993 2557 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:33:08.408169 kubelet[2557]: E1213 01:33:08.408158 2557 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:33:08.408169 kubelet[2557]: W1213 01:33:08.408168 2557 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:33:08.408213 kubelet[2557]: E1213 01:33:08.408184 2557 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:33:08.408340 kubelet[2557]: E1213 01:33:08.408330 2557 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:33:08.408365 kubelet[2557]: W1213 01:33:08.408340 2557 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:33:08.408365 kubelet[2557]: E1213 01:33:08.408353 2557 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:33:08.408494 kubelet[2557]: E1213 01:33:08.408485 2557 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:33:08.408517 kubelet[2557]: W1213 01:33:08.408493 2557 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:33:08.408517 kubelet[2557]: E1213 01:33:08.408506 2557 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:33:08.408670 kubelet[2557]: E1213 01:33:08.408661 2557 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:33:08.408692 kubelet[2557]: W1213 01:33:08.408670 2557 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:33:08.408692 kubelet[2557]: E1213 01:33:08.408680 2557 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:33:08.408994 kubelet[2557]: E1213 01:33:08.408972 2557 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:33:08.408994 kubelet[2557]: W1213 01:33:08.408989 2557 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:33:08.409050 kubelet[2557]: E1213 01:33:08.409040 2557 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:33:08.409227 kubelet[2557]: E1213 01:33:08.409201 2557 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:33:08.409227 kubelet[2557]: W1213 01:33:08.409213 2557 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:33:08.409294 kubelet[2557]: E1213 01:33:08.409236 2557 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:33:08.409371 kubelet[2557]: E1213 01:33:08.409358 2557 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:33:08.409371 kubelet[2557]: W1213 01:33:08.409369 2557 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:33:08.409417 kubelet[2557]: E1213 01:33:08.409382 2557 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:33:08.409556 kubelet[2557]: E1213 01:33:08.409532 2557 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:33:08.409556 kubelet[2557]: W1213 01:33:08.409543 2557 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:33:08.409612 kubelet[2557]: E1213 01:33:08.409578 2557 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:33:08.409738 kubelet[2557]: E1213 01:33:08.409726 2557 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:33:08.409738 kubelet[2557]: W1213 01:33:08.409736 2557 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:33:08.409793 kubelet[2557]: E1213 01:33:08.409754 2557 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:33:08.409965 kubelet[2557]: E1213 01:33:08.409946 2557 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:33:08.409965 kubelet[2557]: W1213 01:33:08.409956 2557 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:33:08.409965 kubelet[2557]: E1213 01:33:08.409966 2557 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:33:08.410305 kubelet[2557]: E1213 01:33:08.410282 2557 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:33:08.410305 kubelet[2557]: W1213 01:33:08.410295 2557 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:33:08.410356 kubelet[2557]: E1213 01:33:08.410310 2557 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:33:08.410482 kubelet[2557]: E1213 01:33:08.410461 2557 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:33:08.410482 kubelet[2557]: W1213 01:33:08.410476 2557 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:33:08.410541 kubelet[2557]: E1213 01:33:08.410487 2557 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:33:08.410639 kubelet[2557]: E1213 01:33:08.410627 2557 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:33:08.410639 kubelet[2557]: W1213 01:33:08.410637 2557 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:33:08.410692 kubelet[2557]: E1213 01:33:08.410647 2557 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:33:08.410919 kubelet[2557]: E1213 01:33:08.410878 2557 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:33:08.410919 kubelet[2557]: W1213 01:33:08.410893 2557 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:33:08.410919 kubelet[2557]: E1213 01:33:08.410905 2557 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:33:08.415450 kubelet[2557]: E1213 01:33:08.414804 2557 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:33:08.415450 kubelet[2557]: W1213 01:33:08.414886 2557 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:33:08.415450 kubelet[2557]: E1213 01:33:08.414902 2557 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:33:08.718468 containerd[1444]: time="2024-12-13T01:33:08.717950108Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:33:08.719347 containerd[1444]: time="2024-12-13T01:33:08.719311230Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=5117811" Dec 13 01:33:08.720328 containerd[1444]: time="2024-12-13T01:33:08.720287009Z" level=info msg="ImageCreate event name:\"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:33:08.722350 containerd[1444]: time="2024-12-13T01:33:08.722285729Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:33:08.723195 containerd[1444]: time="2024-12-13T01:33:08.722741437Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6487425\" in 1.139712741s" Dec 13 01:33:08.723195 containerd[1444]: time="2024-12-13T01:33:08.722783919Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\"" Dec 13 01:33:08.724912 containerd[1444]: time="2024-12-13T01:33:08.724627510Z" level=info msg="CreateContainer within sandbox \"943e18c2d5d32155da1df32349f76045948925a5f86e088a825bcb20a3d04f22\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Dec 13 01:33:08.737361 containerd[1444]: time="2024-12-13T01:33:08.737327475Z" level=info msg="CreateContainer within sandbox \"943e18c2d5d32155da1df32349f76045948925a5f86e088a825bcb20a3d04f22\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"b280e15d449c0baf0c5097064c2aec8b7c44a3168f8b13cf5b28387e743e0b5a\"" Dec 13 01:33:08.738805 containerd[1444]: time="2024-12-13T01:33:08.737701098Z" level=info msg="StartContainer for \"b280e15d449c0baf0c5097064c2aec8b7c44a3168f8b13cf5b28387e743e0b5a\"" Dec 13 01:33:08.763969 systemd[1]: Started cri-containerd-b280e15d449c0baf0c5097064c2aec8b7c44a3168f8b13cf5b28387e743e0b5a.scope - libcontainer container b280e15d449c0baf0c5097064c2aec8b7c44a3168f8b13cf5b28387e743e0b5a. Dec 13 01:33:08.791604 containerd[1444]: time="2024-12-13T01:33:08.791552661Z" level=info msg="StartContainer for \"b280e15d449c0baf0c5097064c2aec8b7c44a3168f8b13cf5b28387e743e0b5a\" returns successfully" Dec 13 01:33:08.808179 systemd[1]: cri-containerd-b280e15d449c0baf0c5097064c2aec8b7c44a3168f8b13cf5b28387e743e0b5a.scope: Deactivated successfully. 
Dec 13 01:33:08.843381 containerd[1444]: time="2024-12-13T01:33:08.843186251Z" level=info msg="shim disconnected" id=b280e15d449c0baf0c5097064c2aec8b7c44a3168f8b13cf5b28387e743e0b5a namespace=k8s.io Dec 13 01:33:08.843381 containerd[1444]: time="2024-12-13T01:33:08.843244494Z" level=warning msg="cleaning up after shim disconnected" id=b280e15d449c0baf0c5097064c2aec8b7c44a3168f8b13cf5b28387e743e0b5a namespace=k8s.io Dec 13 01:33:08.843381 containerd[1444]: time="2024-12-13T01:33:08.843252335Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:33:09.001703 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b280e15d449c0baf0c5097064c2aec8b7c44a3168f8b13cf5b28387e743e0b5a-rootfs.mount: Deactivated successfully. Dec 13 01:33:09.222034 kubelet[2557]: E1213 01:33:09.221961 2557 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-f78qq" podUID="616ab489-e9c3-404d-8315-b87df53098e2" Dec 13 01:33:09.312525 kubelet[2557]: E1213 01:33:09.312139 2557 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:33:09.313162 containerd[1444]: time="2024-12-13T01:33:09.313036840Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Dec 13 01:33:09.314682 kubelet[2557]: I1213 01:33:09.314081 2557 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 01:33:09.314682 kubelet[2557]: E1213 01:33:09.314624 2557 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:33:11.222405 kubelet[2557]: E1213 01:33:11.222366 2557 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-f78qq" podUID="616ab489-e9c3-404d-8315-b87df53098e2" Dec 13 01:33:11.976652 containerd[1444]: time="2024-12-13T01:33:11.976598985Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:33:11.977374 containerd[1444]: time="2024-12-13T01:33:11.977328625Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=89703123" Dec 13 01:33:11.977896 containerd[1444]: time="2024-12-13T01:33:11.977863374Z" level=info msg="ImageCreate event name:\"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:33:11.980799 containerd[1444]: time="2024-12-13T01:33:11.980759332Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:33:11.981285 containerd[1444]: time="2024-12-13T01:33:11.981254039Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest 
\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"91072777\" in 2.668182118s" Dec 13 01:33:11.981322 containerd[1444]: time="2024-12-13T01:33:11.981283721Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\"" Dec 13 01:33:11.982676 containerd[1444]: time="2024-12-13T01:33:11.982626154Z" level=info msg="CreateContainer within sandbox \"943e18c2d5d32155da1df32349f76045948925a5f86e088a825bcb20a3d04f22\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Dec 13 01:33:12.010609 containerd[1444]: time="2024-12-13T01:33:12.010532060Z" level=info msg="CreateContainer within sandbox \"943e18c2d5d32155da1df32349f76045948925a5f86e088a825bcb20a3d04f22\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"408dbe29388233a9a22751d14579c7af1bff3ddc6f6f2b4c8614a479d156cbfd\"" Dec 13 01:33:12.011511 containerd[1444]: time="2024-12-13T01:33:12.011395266Z" level=info msg="StartContainer for \"408dbe29388233a9a22751d14579c7af1bff3ddc6f6f2b4c8614a479d156cbfd\"" Dec 13 01:33:12.042041 systemd[1]: Started cri-containerd-408dbe29388233a9a22751d14579c7af1bff3ddc6f6f2b4c8614a479d156cbfd.scope - libcontainer container 408dbe29388233a9a22751d14579c7af1bff3ddc6f6f2b4c8614a479d156cbfd. Dec 13 01:33:12.065009 containerd[1444]: time="2024-12-13T01:33:12.064976020Z" level=info msg="StartContainer for \"408dbe29388233a9a22751d14579c7af1bff3ddc6f6f2b4c8614a479d156cbfd\" returns successfully" Dec 13 01:33:12.324524 kubelet[2557]: E1213 01:33:12.323609 2557 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:33:12.643955 systemd[1]: cri-containerd-408dbe29388233a9a22751d14579c7af1bff3ddc6f6f2b4c8614a479d156cbfd.scope: Deactivated successfully. Dec 13 01:33:12.663461 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-408dbe29388233a9a22751d14579c7af1bff3ddc6f6f2b4c8614a479d156cbfd-rootfs.mount: Deactivated successfully. 
Dec 13 01:33:12.683160 kubelet[2557]: I1213 01:33:12.683054 2557 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Dec 13 01:33:12.694582 containerd[1444]: time="2024-12-13T01:33:12.694498153Z" level=info msg="shim disconnected" id=408dbe29388233a9a22751d14579c7af1bff3ddc6f6f2b4c8614a479d156cbfd namespace=k8s.io Dec 13 01:33:12.694582 containerd[1444]: time="2024-12-13T01:33:12.694575677Z" level=warning msg="cleaning up after shim disconnected" id=408dbe29388233a9a22751d14579c7af1bff3ddc6f6f2b4c8614a479d156cbfd namespace=k8s.io Dec 13 01:33:12.694582 containerd[1444]: time="2024-12-13T01:33:12.694585438Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:33:12.712888 kubelet[2557]: I1213 01:33:12.712848 2557 topology_manager.go:215] "Topology Admit Handler" podUID="14a279e5-ffed-4fd4-a4aa-0fc271831c85" podNamespace="kube-system" podName="coredns-76f75df574-f98xf" Dec 13 01:33:12.713347 containerd[1444]: time="2024-12-13T01:33:12.712972610Z" level=warning msg="cleanup warnings time=\"2024-12-13T01:33:12Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Dec 13 01:33:12.714438 kubelet[2557]: I1213 01:33:12.714406 2557 topology_manager.go:215] "Topology Admit Handler" podUID="4c7b6da3-bd72-4f9f-b426-ad3b46174127" podNamespace="calico-system" podName="calico-kube-controllers-7fd5945fd6-dcfsq" Dec 13 01:33:12.714680 kubelet[2557]: I1213 01:33:12.714527 2557 topology_manager.go:215] "Topology Admit Handler" podUID="98996d25-3276-48cb-98f2-b0369f62d55a" podNamespace="calico-apiserver" podName="calico-apiserver-d9556f564-qscm5" Dec 13 01:33:12.715174 kubelet[2557]: I1213 01:33:12.715103 2557 topology_manager.go:215] "Topology Admit Handler" podUID="fd5b119e-695f-466c-85b1-1df84ffeb4f8" podNamespace="kube-system" podName="coredns-76f75df574-rwfq6" Dec 13 01:33:12.715778 kubelet[2557]: I1213 01:33:12.715492 2557 topology_manager.go:215] "Topology Admit Handler" podUID="7417bd6c-d06e-4e11-95c2-509efea5ad02" podNamespace="calico-apiserver" podName="calico-apiserver-d9556f564-hkcx8" Dec 13 01:33:12.727825 systemd[1]: Created slice kubepods-burstable-pod14a279e5_ffed_4fd4_a4aa_0fc271831c85.slice - libcontainer container kubepods-burstable-pod14a279e5_ffed_4fd4_a4aa_0fc271831c85.slice. Dec 13 01:33:12.734301 systemd[1]: Created slice kubepods-besteffort-pod4c7b6da3_bd72_4f9f_b426_ad3b46174127.slice - libcontainer container kubepods-besteffort-pod4c7b6da3_bd72_4f9f_b426_ad3b46174127.slice. Dec 13 01:33:12.740672 systemd[1]: Created slice kubepods-burstable-podfd5b119e_695f_466c_85b1_1df84ffeb4f8.slice - libcontainer container kubepods-burstable-podfd5b119e_695f_466c_85b1_1df84ffeb4f8.slice. Dec 13 01:33:12.748069 systemd[1]: Created slice kubepods-besteffort-pod98996d25_3276_48cb_98f2_b0369f62d55a.slice - libcontainer container kubepods-besteffort-pod98996d25_3276_48cb_98f2_b0369f62d55a.slice. Dec 13 01:33:12.751503 systemd[1]: Created slice kubepods-besteffort-pod7417bd6c_d06e_4e11_95c2_509efea5ad02.slice - libcontainer container kubepods-besteffort-pod7417bd6c_d06e_4e11_95c2_509efea5ad02.slice. 
Dec 13 01:33:12.861268 kubelet[2557]: I1213 01:33:12.861227 2557 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/7417bd6c-d06e-4e11-95c2-509efea5ad02-calico-apiserver-certs\") pod \"calico-apiserver-d9556f564-hkcx8\" (UID: \"7417bd6c-d06e-4e11-95c2-509efea5ad02\") " pod="calico-apiserver/calico-apiserver-d9556f564-hkcx8" Dec 13 01:33:12.861268 kubelet[2557]: I1213 01:33:12.861274 2557 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4c7b6da3-bd72-4f9f-b426-ad3b46174127-tigera-ca-bundle\") pod \"calico-kube-controllers-7fd5945fd6-dcfsq\" (UID: \"4c7b6da3-bd72-4f9f-b426-ad3b46174127\") " pod="calico-system/calico-kube-controllers-7fd5945fd6-dcfsq" Dec 13 01:33:12.861435 kubelet[2557]: I1213 01:33:12.861301 2557 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w99sx\" (UniqueName: \"kubernetes.io/projected/14a279e5-ffed-4fd4-a4aa-0fc271831c85-kube-api-access-w99sx\") pod \"coredns-76f75df574-f98xf\" (UID: \"14a279e5-ffed-4fd4-a4aa-0fc271831c85\") " pod="kube-system/coredns-76f75df574-f98xf" Dec 13 01:33:12.861435 kubelet[2557]: I1213 01:33:12.861323 2557 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pftnq\" (UniqueName: \"kubernetes.io/projected/fd5b119e-695f-466c-85b1-1df84ffeb4f8-kube-api-access-pftnq\") pod \"coredns-76f75df574-rwfq6\" (UID: \"fd5b119e-695f-466c-85b1-1df84ffeb4f8\") " pod="kube-system/coredns-76f75df574-rwfq6" Dec 13 01:33:12.861435 kubelet[2557]: I1213 01:33:12.861354 2557 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8kfl7\" (UniqueName: \"kubernetes.io/projected/98996d25-3276-48cb-98f2-b0369f62d55a-kube-api-access-8kfl7\") pod \"calico-apiserver-d9556f564-qscm5\" (UID: \"98996d25-3276-48cb-98f2-b0369f62d55a\") " pod="calico-apiserver/calico-apiserver-d9556f564-qscm5" Dec 13 01:33:12.861435 kubelet[2557]: I1213 01:33:12.861374 2557 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fd5b119e-695f-466c-85b1-1df84ffeb4f8-config-volume\") pod \"coredns-76f75df574-rwfq6\" (UID: \"fd5b119e-695f-466c-85b1-1df84ffeb4f8\") " pod="kube-system/coredns-76f75df574-rwfq6" Dec 13 01:33:12.861435 kubelet[2557]: I1213 01:33:12.861395 2557 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-csd5h\" (UniqueName: \"kubernetes.io/projected/4c7b6da3-bd72-4f9f-b426-ad3b46174127-kube-api-access-csd5h\") pod \"calico-kube-controllers-7fd5945fd6-dcfsq\" (UID: \"4c7b6da3-bd72-4f9f-b426-ad3b46174127\") " pod="calico-system/calico-kube-controllers-7fd5945fd6-dcfsq" Dec 13 01:33:12.861581 kubelet[2557]: I1213 01:33:12.861417 2557 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-49v8h\" (UniqueName: \"kubernetes.io/projected/7417bd6c-d06e-4e11-95c2-509efea5ad02-kube-api-access-49v8h\") pod \"calico-apiserver-d9556f564-hkcx8\" (UID: \"7417bd6c-d06e-4e11-95c2-509efea5ad02\") " pod="calico-apiserver/calico-apiserver-d9556f564-hkcx8" Dec 13 01:33:12.861581 kubelet[2557]: I1213 01:33:12.861445 2557 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/14a279e5-ffed-4fd4-a4aa-0fc271831c85-config-volume\") pod \"coredns-76f75df574-f98xf\" (UID: \"14a279e5-ffed-4fd4-a4aa-0fc271831c85\") " pod="kube-system/coredns-76f75df574-f98xf" Dec 13 01:33:12.861581 kubelet[2557]: I1213 01:33:12.861481 2557 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/98996d25-3276-48cb-98f2-b0369f62d55a-calico-apiserver-certs\") pod \"calico-apiserver-d9556f564-qscm5\" (UID: \"98996d25-3276-48cb-98f2-b0369f62d55a\") " pod="calico-apiserver/calico-apiserver-d9556f564-qscm5" Dec 13 01:33:13.026750 systemd[1]: Started sshd@7-10.0.0.74:22-10.0.0.1:34576.service - OpenSSH per-connection server daemon (10.0.0.1:34576). Dec 13 01:33:13.032840 kubelet[2557]: E1213 01:33:13.032608 2557 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:33:13.033405 containerd[1444]: time="2024-12-13T01:33:13.033364944Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-f98xf,Uid:14a279e5-ffed-4fd4-a4aa-0fc271831c85,Namespace:kube-system,Attempt:0,}" Dec 13 01:33:13.038576 containerd[1444]: time="2024-12-13T01:33:13.038399842Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7fd5945fd6-dcfsq,Uid:4c7b6da3-bd72-4f9f-b426-ad3b46174127,Namespace:calico-system,Attempt:0,}" Dec 13 01:33:13.044965 kubelet[2557]: E1213 01:33:13.044924 2557 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:33:13.045709 containerd[1444]: time="2024-12-13T01:33:13.045671456Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-rwfq6,Uid:fd5b119e-695f-466c-85b1-1df84ffeb4f8,Namespace:kube-system,Attempt:0,}" Dec 13 01:33:13.053252 containerd[1444]: time="2024-12-13T01:33:13.053076836Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d9556f564-qscm5,Uid:98996d25-3276-48cb-98f2-b0369f62d55a,Namespace:calico-apiserver,Attempt:0,}" Dec 13 01:33:13.054782 containerd[1444]: time="2024-12-13T01:33:13.054535671Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d9556f564-hkcx8,Uid:7417bd6c-d06e-4e11-95c2-509efea5ad02,Namespace:calico-apiserver,Attempt:0,}" Dec 13 01:33:13.083165 sshd[3280]: Accepted publickey for core from 10.0.0.1 port 34576 ssh2: RSA SHA256:yVKhZEHbC7ylZ7bY3Y8pwdh1t/xp6Vz/y3yLFfd9j+Q Dec 13 01:33:13.084890 sshd[3280]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:33:13.149884 systemd-logind[1427]: New session 8 of user core. Dec 13 01:33:13.171772 systemd[1]: Started session-8.scope - Session 8 of User core. Dec 13 01:33:13.246097 systemd[1]: Created slice kubepods-besteffort-pod616ab489_e9c3_404d_8315_b87df53098e2.slice - libcontainer container kubepods-besteffort-pod616ab489_e9c3_404d_8315_b87df53098e2.slice. 
Dec 13 01:33:13.254643 containerd[1444]: time="2024-12-13T01:33:13.254586659Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-f78qq,Uid:616ab489-e9c3-404d-8315-b87df53098e2,Namespace:calico-system,Attempt:0,}" Dec 13 01:33:13.341833 kubelet[2557]: E1213 01:33:13.341439 2557 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:33:13.343281 containerd[1444]: time="2024-12-13T01:33:13.343068841Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Dec 13 01:33:13.397877 sshd[3280]: pam_unix(sshd:session): session closed for user core Dec 13 01:33:13.402790 systemd[1]: sshd@7-10.0.0.74:22-10.0.0.1:34576.service: Deactivated successfully. Dec 13 01:33:13.407643 systemd[1]: session-8.scope: Deactivated successfully. Dec 13 01:33:13.410415 systemd-logind[1427]: Session 8 logged out. Waiting for processes to exit. Dec 13 01:33:13.411760 systemd-logind[1427]: Removed session 8. Dec 13 01:33:13.432608 containerd[1444]: time="2024-12-13T01:33:13.432549554Z" level=error msg="Failed to destroy network for sandbox \"e2280bfa2e7a43cda63ef0a9569724a6aceeaba71b17d30b4ff629813c776b9e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:33:13.433910 containerd[1444]: time="2024-12-13T01:33:13.433834420Z" level=error msg="encountered an error cleaning up failed sandbox \"e2280bfa2e7a43cda63ef0a9569724a6aceeaba71b17d30b4ff629813c776b9e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:33:13.433910 containerd[1444]: time="2024-12-13T01:33:13.433888583Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d9556f564-hkcx8,Uid:7417bd6c-d06e-4e11-95c2-509efea5ad02,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e2280bfa2e7a43cda63ef0a9569724a6aceeaba71b17d30b4ff629813c776b9e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:33:13.443413 containerd[1444]: time="2024-12-13T01:33:13.440961226Z" level=error msg="Failed to destroy network for sandbox \"b7dcd408c4f41fa8a717f4c325d95908087b3b3a6410435fac0971513fc5b75a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:33:13.443413 containerd[1444]: time="2024-12-13T01:33:13.443381750Z" level=error msg="Failed to destroy network for sandbox \"4955b02e1457118e8633dd6c8a5802b1901118578fd5a31d7a20b8f04460eb0b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:33:13.443729 containerd[1444]: time="2024-12-13T01:33:13.443692646Z" level=error msg="encountered an error cleaning up failed sandbox \"4955b02e1457118e8633dd6c8a5802b1901118578fd5a31d7a20b8f04460eb0b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:33:13.443781 containerd[1444]: time="2024-12-13T01:33:13.443762049Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-f98xf,Uid:14a279e5-ffed-4fd4-a4aa-0fc271831c85,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4955b02e1457118e8633dd6c8a5802b1901118578fd5a31d7a20b8f04460eb0b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:33:13.443995 containerd[1444]: time="2024-12-13T01:33:13.443925578Z" level=error msg="encountered an error cleaning up failed sandbox \"b7dcd408c4f41fa8a717f4c325d95908087b3b3a6410435fac0971513fc5b75a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:33:13.444051 containerd[1444]: time="2024-12-13T01:33:13.444007502Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-rwfq6,Uid:fd5b119e-695f-466c-85b1-1df84ffeb4f8,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b7dcd408c4f41fa8a717f4c325d95908087b3b3a6410435fac0971513fc5b75a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:33:13.444270 kubelet[2557]: E1213 01:33:13.441103 2557 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e2280bfa2e7a43cda63ef0a9569724a6aceeaba71b17d30b4ff629813c776b9e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:33:13.444333 kubelet[2557]: E1213 01:33:13.444316 2557 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e2280bfa2e7a43cda63ef0a9569724a6aceeaba71b17d30b4ff629813c776b9e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-d9556f564-hkcx8" Dec 13 01:33:13.444362 kubelet[2557]: E1213 01:33:13.444339 2557 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e2280bfa2e7a43cda63ef0a9569724a6aceeaba71b17d30b4ff629813c776b9e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-d9556f564-hkcx8" Dec 13 01:33:13.444416 kubelet[2557]: E1213 01:33:13.444398 2557 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-d9556f564-hkcx8_calico-apiserver(7417bd6c-d06e-4e11-95c2-509efea5ad02)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-d9556f564-hkcx8_calico-apiserver(7417bd6c-d06e-4e11-95c2-509efea5ad02)\\\": rpc error: code = 
Unknown desc = failed to setup network for sandbox \\\"e2280bfa2e7a43cda63ef0a9569724a6aceeaba71b17d30b4ff629813c776b9e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-d9556f564-hkcx8" podUID="7417bd6c-d06e-4e11-95c2-509efea5ad02" Dec 13 01:33:13.444897 kubelet[2557]: E1213 01:33:13.443990 2557 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4955b02e1457118e8633dd6c8a5802b1901118578fd5a31d7a20b8f04460eb0b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:33:13.444957 kubelet[2557]: E1213 01:33:13.444913 2557 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4955b02e1457118e8633dd6c8a5802b1901118578fd5a31d7a20b8f04460eb0b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-f98xf" Dec 13 01:33:13.444957 kubelet[2557]: E1213 01:33:13.444936 2557 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4955b02e1457118e8633dd6c8a5802b1901118578fd5a31d7a20b8f04460eb0b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-f98xf" Dec 13 01:33:13.445008 kubelet[2557]: E1213 01:33:13.444971 2557 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-f98xf_kube-system(14a279e5-ffed-4fd4-a4aa-0fc271831c85)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-f98xf_kube-system(14a279e5-ffed-4fd4-a4aa-0fc271831c85)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4955b02e1457118e8633dd6c8a5802b1901118578fd5a31d7a20b8f04460eb0b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-f98xf" podUID="14a279e5-ffed-4fd4-a4aa-0fc271831c85" Dec 13 01:33:13.445008 kubelet[2557]: E1213 01:33:13.444167 2557 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b7dcd408c4f41fa8a717f4c325d95908087b3b3a6410435fac0971513fc5b75a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:33:13.445008 kubelet[2557]: E1213 01:33:13.445007 2557 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b7dcd408c4f41fa8a717f4c325d95908087b3b3a6410435fac0971513fc5b75a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-76f75df574-rwfq6" Dec 13 01:33:13.445117 kubelet[2557]: E1213 01:33:13.445032 2557 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b7dcd408c4f41fa8a717f4c325d95908087b3b3a6410435fac0971513fc5b75a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-rwfq6" Dec 13 01:33:13.445117 kubelet[2557]: E1213 01:33:13.445060 2557 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-rwfq6_kube-system(fd5b119e-695f-466c-85b1-1df84ffeb4f8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-rwfq6_kube-system(fd5b119e-695f-466c-85b1-1df84ffeb4f8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b7dcd408c4f41fa8a717f4c325d95908087b3b3a6410435fac0971513fc5b75a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-rwfq6" podUID="fd5b119e-695f-466c-85b1-1df84ffeb4f8" Dec 13 01:33:13.447334 containerd[1444]: time="2024-12-13T01:33:13.447295031Z" level=error msg="Failed to destroy network for sandbox \"27c1980e72fd94912649c06eb672cf2928715d4d8c5fd0a59db4d051dd893f60\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:33:13.447804 containerd[1444]: time="2024-12-13T01:33:13.447761975Z" level=error msg="encountered an error cleaning up failed sandbox \"27c1980e72fd94912649c06eb672cf2928715d4d8c5fd0a59db4d051dd893f60\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:33:13.447878 containerd[1444]: time="2024-12-13T01:33:13.447819738Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7fd5945fd6-dcfsq,Uid:4c7b6da3-bd72-4f9f-b426-ad3b46174127,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"27c1980e72fd94912649c06eb672cf2928715d4d8c5fd0a59db4d051dd893f60\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:33:13.448172 kubelet[2557]: E1213 01:33:13.448039 2557 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"27c1980e72fd94912649c06eb672cf2928715d4d8c5fd0a59db4d051dd893f60\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:33:13.448172 kubelet[2557]: E1213 01:33:13.448076 2557 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"27c1980e72fd94912649c06eb672cf2928715d4d8c5fd0a59db4d051dd893f60\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7fd5945fd6-dcfsq" Dec 13 01:33:13.448172 kubelet[2557]: E1213 01:33:13.448094 2557 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"27c1980e72fd94912649c06eb672cf2928715d4d8c5fd0a59db4d051dd893f60\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7fd5945fd6-dcfsq" Dec 13 01:33:13.448271 kubelet[2557]: E1213 01:33:13.448148 2557 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7fd5945fd6-dcfsq_calico-system(4c7b6da3-bd72-4f9f-b426-ad3b46174127)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7fd5945fd6-dcfsq_calico-system(4c7b6da3-bd72-4f9f-b426-ad3b46174127)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"27c1980e72fd94912649c06eb672cf2928715d4d8c5fd0a59db4d051dd893f60\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7fd5945fd6-dcfsq" podUID="4c7b6da3-bd72-4f9f-b426-ad3b46174127" Dec 13 01:33:13.450660 containerd[1444]: time="2024-12-13T01:33:13.450615961Z" level=error msg="Failed to destroy network for sandbox \"b47cc38580f7fa65992edabae074323e10bfd53c72e9a8151438a1a43bba314f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:33:13.450984 containerd[1444]: time="2024-12-13T01:33:13.450957779Z" level=error msg="encountered an error cleaning up failed sandbox \"b47cc38580f7fa65992edabae074323e10bfd53c72e9a8151438a1a43bba314f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:33:13.451026 containerd[1444]: time="2024-12-13T01:33:13.451000821Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d9556f564-qscm5,Uid:98996d25-3276-48cb-98f2-b0369f62d55a,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b47cc38580f7fa65992edabae074323e10bfd53c72e9a8151438a1a43bba314f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:33:13.451204 kubelet[2557]: E1213 01:33:13.451187 2557 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b47cc38580f7fa65992edabae074323e10bfd53c72e9a8151438a1a43bba314f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:33:13.451343 kubelet[2557]: E1213 01:33:13.451328 2557 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"b47cc38580f7fa65992edabae074323e10bfd53c72e9a8151438a1a43bba314f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-d9556f564-qscm5" Dec 13 01:33:13.451381 kubelet[2557]: E1213 01:33:13.451354 2557 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b47cc38580f7fa65992edabae074323e10bfd53c72e9a8151438a1a43bba314f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-d9556f564-qscm5" Dec 13 01:33:13.451425 kubelet[2557]: E1213 01:33:13.451408 2557 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-d9556f564-qscm5_calico-apiserver(98996d25-3276-48cb-98f2-b0369f62d55a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-d9556f564-qscm5_calico-apiserver(98996d25-3276-48cb-98f2-b0369f62d55a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b47cc38580f7fa65992edabae074323e10bfd53c72e9a8151438a1a43bba314f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-d9556f564-qscm5" podUID="98996d25-3276-48cb-98f2-b0369f62d55a" Dec 13 01:33:13.456237 containerd[1444]: time="2024-12-13T01:33:13.455983837Z" level=error msg="Failed to destroy network for sandbox \"b121e35746b1350f74d90263185605db9f427569301109de2a624ea65061acc0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:33:13.456589 containerd[1444]: time="2024-12-13T01:33:13.456496543Z" level=error msg="encountered an error cleaning up failed sandbox \"b121e35746b1350f74d90263185605db9f427569301109de2a624ea65061acc0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:33:13.456589 containerd[1444]: time="2024-12-13T01:33:13.456545785Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-f78qq,Uid:616ab489-e9c3-404d-8315-b87df53098e2,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b121e35746b1350f74d90263185605db9f427569301109de2a624ea65061acc0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:33:13.456841 kubelet[2557]: E1213 01:33:13.456804 2557 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b121e35746b1350f74d90263185605db9f427569301109de2a624ea65061acc0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:33:13.456920 kubelet[2557]: E1213 01:33:13.456862 2557 
kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b121e35746b1350f74d90263185605db9f427569301109de2a624ea65061acc0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-f78qq" Dec 13 01:33:13.456920 kubelet[2557]: E1213 01:33:13.456889 2557 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b121e35746b1350f74d90263185605db9f427569301109de2a624ea65061acc0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-f78qq" Dec 13 01:33:13.457026 kubelet[2557]: E1213 01:33:13.456932 2557 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-f78qq_calico-system(616ab489-e9c3-404d-8315-b87df53098e2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-f78qq_calico-system(616ab489-e9c3-404d-8315-b87df53098e2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b121e35746b1350f74d90263185605db9f427569301109de2a624ea65061acc0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-f78qq" podUID="616ab489-e9c3-404d-8315-b87df53098e2" Dec 13 01:33:14.006450 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-27c1980e72fd94912649c06eb672cf2928715d4d8c5fd0a59db4d051dd893f60-shm.mount: Deactivated successfully. Dec 13 01:33:14.006537 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4955b02e1457118e8633dd6c8a5802b1901118578fd5a31d7a20b8f04460eb0b-shm.mount: Deactivated successfully. 
Dec 13 01:33:14.347616 kubelet[2557]: I1213 01:33:14.347508 2557 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b121e35746b1350f74d90263185605db9f427569301109de2a624ea65061acc0" Dec 13 01:33:14.349315 containerd[1444]: time="2024-12-13T01:33:14.349178815Z" level=info msg="StopPodSandbox for \"b121e35746b1350f74d90263185605db9f427569301109de2a624ea65061acc0\"" Dec 13 01:33:14.350892 containerd[1444]: time="2024-12-13T01:33:14.349360664Z" level=info msg="Ensure that sandbox b121e35746b1350f74d90263185605db9f427569301109de2a624ea65061acc0 in task-service has been cleanup successfully" Dec 13 01:33:14.350892 containerd[1444]: time="2024-12-13T01:33:14.350461039Z" level=info msg="StopPodSandbox for \"27c1980e72fd94912649c06eb672cf2928715d4d8c5fd0a59db4d051dd893f60\"" Dec 13 01:33:14.350892 containerd[1444]: time="2024-12-13T01:33:14.350592205Z" level=info msg="Ensure that sandbox 27c1980e72fd94912649c06eb672cf2928715d4d8c5fd0a59db4d051dd893f60 in task-service has been cleanup successfully" Dec 13 01:33:14.350971 kubelet[2557]: I1213 01:33:14.349743 2557 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="27c1980e72fd94912649c06eb672cf2928715d4d8c5fd0a59db4d051dd893f60" Dec 13 01:33:14.352168 kubelet[2557]: I1213 01:33:14.352009 2557 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b7dcd408c4f41fa8a717f4c325d95908087b3b3a6410435fac0971513fc5b75a" Dec 13 01:33:14.352455 containerd[1444]: time="2024-12-13T01:33:14.352423497Z" level=info msg="StopPodSandbox for \"b7dcd408c4f41fa8a717f4c325d95908087b3b3a6410435fac0971513fc5b75a\"" Dec 13 01:33:14.352578 containerd[1444]: time="2024-12-13T01:33:14.352555703Z" level=info msg="Ensure that sandbox b7dcd408c4f41fa8a717f4c325d95908087b3b3a6410435fac0971513fc5b75a in task-service has been cleanup successfully" Dec 13 01:33:14.354284 kubelet[2557]: I1213 01:33:14.353998 2557 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4955b02e1457118e8633dd6c8a5802b1901118578fd5a31d7a20b8f04460eb0b" Dec 13 01:33:14.355481 containerd[1444]: time="2024-12-13T01:33:14.355446767Z" level=info msg="StopPodSandbox for \"4955b02e1457118e8633dd6c8a5802b1901118578fd5a31d7a20b8f04460eb0b\"" Dec 13 01:33:14.355618 containerd[1444]: time="2024-12-13T01:33:14.355589854Z" level=info msg="Ensure that sandbox 4955b02e1457118e8633dd6c8a5802b1901118578fd5a31d7a20b8f04460eb0b in task-service has been cleanup successfully" Dec 13 01:33:14.358600 kubelet[2557]: I1213 01:33:14.358254 2557 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e2280bfa2e7a43cda63ef0a9569724a6aceeaba71b17d30b4ff629813c776b9e" Dec 13 01:33:14.359148 containerd[1444]: time="2024-12-13T01:33:14.359115630Z" level=info msg="StopPodSandbox for \"e2280bfa2e7a43cda63ef0a9569724a6aceeaba71b17d30b4ff629813c776b9e\"" Dec 13 01:33:14.359285 containerd[1444]: time="2024-12-13T01:33:14.359258837Z" level=info msg="Ensure that sandbox e2280bfa2e7a43cda63ef0a9569724a6aceeaba71b17d30b4ff629813c776b9e in task-service has been cleanup successfully" Dec 13 01:33:14.359959 kubelet[2557]: I1213 01:33:14.359942 2557 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b47cc38580f7fa65992edabae074323e10bfd53c72e9a8151438a1a43bba314f" Dec 13 01:33:14.362602 containerd[1444]: time="2024-12-13T01:33:14.362575363Z" level=info msg="StopPodSandbox for \"b47cc38580f7fa65992edabae074323e10bfd53c72e9a8151438a1a43bba314f\"" Dec 13 01:33:14.363323 
containerd[1444]: time="2024-12-13T01:33:14.363105909Z" level=info msg="Ensure that sandbox b47cc38580f7fa65992edabae074323e10bfd53c72e9a8151438a1a43bba314f in task-service has been cleanup successfully" Dec 13 01:33:14.408995 containerd[1444]: time="2024-12-13T01:33:14.408930835Z" level=error msg="StopPodSandbox for \"b121e35746b1350f74d90263185605db9f427569301109de2a624ea65061acc0\" failed" error="failed to destroy network for sandbox \"b121e35746b1350f74d90263185605db9f427569301109de2a624ea65061acc0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:33:14.410433 containerd[1444]: time="2024-12-13T01:33:14.410032769Z" level=error msg="StopPodSandbox for \"b7dcd408c4f41fa8a717f4c325d95908087b3b3a6410435fac0971513fc5b75a\" failed" error="failed to destroy network for sandbox \"b7dcd408c4f41fa8a717f4c325d95908087b3b3a6410435fac0971513fc5b75a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:33:14.412991 kubelet[2557]: E1213 01:33:14.412966 2557 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b7dcd408c4f41fa8a717f4c325d95908087b3b3a6410435fac0971513fc5b75a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b7dcd408c4f41fa8a717f4c325d95908087b3b3a6410435fac0971513fc5b75a" Dec 13 01:33:14.413212 kubelet[2557]: E1213 01:33:14.413186 2557 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b7dcd408c4f41fa8a717f4c325d95908087b3b3a6410435fac0971513fc5b75a"} Dec 13 01:33:14.413362 kubelet[2557]: E1213 01:33:14.413310 2557 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"fd5b119e-695f-466c-85b1-1df84ffeb4f8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b7dcd408c4f41fa8a717f4c325d95908087b3b3a6410435fac0971513fc5b75a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 01:33:14.413362 kubelet[2557]: E1213 01:33:14.413342 2557 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"fd5b119e-695f-466c-85b1-1df84ffeb4f8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b7dcd408c4f41fa8a717f4c325d95908087b3b3a6410435fac0971513fc5b75a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-rwfq6" podUID="fd5b119e-695f-466c-85b1-1df84ffeb4f8" Dec 13 01:33:14.415065 kubelet[2557]: E1213 01:33:14.414920 2557 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b121e35746b1350f74d90263185605db9f427569301109de2a624ea65061acc0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" podSandboxID="b121e35746b1350f74d90263185605db9f427569301109de2a624ea65061acc0" Dec 13 01:33:14.415065 kubelet[2557]: E1213 01:33:14.414966 2557 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b121e35746b1350f74d90263185605db9f427569301109de2a624ea65061acc0"} Dec 13 01:33:14.415065 kubelet[2557]: E1213 01:33:14.415005 2557 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"616ab489-e9c3-404d-8315-b87df53098e2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b121e35746b1350f74d90263185605db9f427569301109de2a624ea65061acc0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 01:33:14.415065 kubelet[2557]: E1213 01:33:14.415039 2557 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"616ab489-e9c3-404d-8315-b87df53098e2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b121e35746b1350f74d90263185605db9f427569301109de2a624ea65061acc0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-f78qq" podUID="616ab489-e9c3-404d-8315-b87df53098e2" Dec 13 01:33:14.417654 containerd[1444]: time="2024-12-13T01:33:14.417618988Z" level=error msg="StopPodSandbox for \"e2280bfa2e7a43cda63ef0a9569724a6aceeaba71b17d30b4ff629813c776b9e\" failed" error="failed to destroy network for sandbox \"e2280bfa2e7a43cda63ef0a9569724a6aceeaba71b17d30b4ff629813c776b9e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:33:14.418692 kubelet[2557]: E1213 01:33:14.418084 2557 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e2280bfa2e7a43cda63ef0a9569724a6aceeaba71b17d30b4ff629813c776b9e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e2280bfa2e7a43cda63ef0a9569724a6aceeaba71b17d30b4ff629813c776b9e" Dec 13 01:33:14.418692 kubelet[2557]: E1213 01:33:14.418121 2557 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e2280bfa2e7a43cda63ef0a9569724a6aceeaba71b17d30b4ff629813c776b9e"} Dec 13 01:33:14.418692 kubelet[2557]: E1213 01:33:14.418151 2557 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7417bd6c-d06e-4e11-95c2-509efea5ad02\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e2280bfa2e7a43cda63ef0a9569724a6aceeaba71b17d30b4ff629813c776b9e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 01:33:14.418692 kubelet[2557]: E1213 01:33:14.418175 2557 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for 
\"7417bd6c-d06e-4e11-95c2-509efea5ad02\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e2280bfa2e7a43cda63ef0a9569724a6aceeaba71b17d30b4ff629813c776b9e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-d9556f564-hkcx8" podUID="7417bd6c-d06e-4e11-95c2-509efea5ad02" Dec 13 01:33:14.421052 containerd[1444]: time="2024-12-13T01:33:14.421003037Z" level=error msg="StopPodSandbox for \"4955b02e1457118e8633dd6c8a5802b1901118578fd5a31d7a20b8f04460eb0b\" failed" error="failed to destroy network for sandbox \"4955b02e1457118e8633dd6c8a5802b1901118578fd5a31d7a20b8f04460eb0b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:33:14.421355 kubelet[2557]: E1213 01:33:14.421251 2557 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4955b02e1457118e8633dd6c8a5802b1901118578fd5a31d7a20b8f04460eb0b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4955b02e1457118e8633dd6c8a5802b1901118578fd5a31d7a20b8f04460eb0b" Dec 13 01:33:14.421355 kubelet[2557]: E1213 01:33:14.421278 2557 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4955b02e1457118e8633dd6c8a5802b1901118578fd5a31d7a20b8f04460eb0b"} Dec 13 01:33:14.421355 kubelet[2557]: E1213 01:33:14.421310 2557 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"14a279e5-ffed-4fd4-a4aa-0fc271831c85\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4955b02e1457118e8633dd6c8a5802b1901118578fd5a31d7a20b8f04460eb0b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 01:33:14.421355 kubelet[2557]: E1213 01:33:14.421334 2557 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"14a279e5-ffed-4fd4-a4aa-0fc271831c85\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4955b02e1457118e8633dd6c8a5802b1901118578fd5a31d7a20b8f04460eb0b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-f98xf" podUID="14a279e5-ffed-4fd4-a4aa-0fc271831c85" Dec 13 01:33:14.423971 containerd[1444]: time="2024-12-13T01:33:14.423927022Z" level=error msg="StopPodSandbox for \"27c1980e72fd94912649c06eb672cf2928715d4d8c5fd0a59db4d051dd893f60\" failed" error="failed to destroy network for sandbox \"27c1980e72fd94912649c06eb672cf2928715d4d8c5fd0a59db4d051dd893f60\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:33:14.424269 kubelet[2557]: E1213 01:33:14.424166 2557 remote_runtime.go:222] "StopPodSandbox 
from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"27c1980e72fd94912649c06eb672cf2928715d4d8c5fd0a59db4d051dd893f60\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="27c1980e72fd94912649c06eb672cf2928715d4d8c5fd0a59db4d051dd893f60" Dec 13 01:33:14.424269 kubelet[2557]: E1213 01:33:14.424195 2557 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"27c1980e72fd94912649c06eb672cf2928715d4d8c5fd0a59db4d051dd893f60"} Dec 13 01:33:14.424269 kubelet[2557]: E1213 01:33:14.424223 2557 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4c7b6da3-bd72-4f9f-b426-ad3b46174127\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"27c1980e72fd94912649c06eb672cf2928715d4d8c5fd0a59db4d051dd893f60\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 01:33:14.424269 kubelet[2557]: E1213 01:33:14.424246 2557 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4c7b6da3-bd72-4f9f-b426-ad3b46174127\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"27c1980e72fd94912649c06eb672cf2928715d4d8c5fd0a59db4d051dd893f60\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7fd5945fd6-dcfsq" podUID="4c7b6da3-bd72-4f9f-b426-ad3b46174127" Dec 13 01:33:14.430080 containerd[1444]: time="2024-12-13T01:33:14.430034487Z" level=error msg="StopPodSandbox for \"b47cc38580f7fa65992edabae074323e10bfd53c72e9a8151438a1a43bba314f\" failed" error="failed to destroy network for sandbox \"b47cc38580f7fa65992edabae074323e10bfd53c72e9a8151438a1a43bba314f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:33:14.430356 kubelet[2557]: E1213 01:33:14.430329 2557 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b47cc38580f7fa65992edabae074323e10bfd53c72e9a8151438a1a43bba314f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b47cc38580f7fa65992edabae074323e10bfd53c72e9a8151438a1a43bba314f" Dec 13 01:33:14.430420 kubelet[2557]: E1213 01:33:14.430365 2557 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b47cc38580f7fa65992edabae074323e10bfd53c72e9a8151438a1a43bba314f"} Dec 13 01:33:14.430420 kubelet[2557]: E1213 01:33:14.430396 2557 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"98996d25-3276-48cb-98f2-b0369f62d55a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b47cc38580f7fa65992edabae074323e10bfd53c72e9a8151438a1a43bba314f\\\": plugin type=\\\"calico\\\" failed 
(delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 01:33:14.430484 kubelet[2557]: E1213 01:33:14.430422 2557 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"98996d25-3276-48cb-98f2-b0369f62d55a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b47cc38580f7fa65992edabae074323e10bfd53c72e9a8151438a1a43bba314f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-d9556f564-qscm5" podUID="98996d25-3276-48cb-98f2-b0369f62d55a" Dec 13 01:33:17.299677 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount526200438.mount: Deactivated successfully. Dec 13 01:33:17.529961 containerd[1444]: time="2024-12-13T01:33:17.529909612Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:33:17.530646 containerd[1444]: time="2024-12-13T01:33:17.530613844Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=137671762" Dec 13 01:33:17.531927 containerd[1444]: time="2024-12-13T01:33:17.531873502Z" level=info msg="ImageCreate event name:\"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:33:17.533710 containerd[1444]: time="2024-12-13T01:33:17.533659824Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:33:17.534498 containerd[1444]: time="2024-12-13T01:33:17.534166928Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"137671624\" in 4.191055485s" Dec 13 01:33:17.534498 containerd[1444]: time="2024-12-13T01:33:17.534208889Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\"" Dec 13 01:33:17.541105 containerd[1444]: time="2024-12-13T01:33:17.541068685Z" level=info msg="CreateContainer within sandbox \"943e18c2d5d32155da1df32349f76045948925a5f86e088a825bcb20a3d04f22\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Dec 13 01:33:17.554796 containerd[1444]: time="2024-12-13T01:33:17.554601668Z" level=info msg="CreateContainer within sandbox \"943e18c2d5d32155da1df32349f76045948925a5f86e088a825bcb20a3d04f22\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"2e979e17087764dfd54fee328caf75a4f5f12da6a496b0b22e5f6ae35a4fcc91\"" Dec 13 01:33:17.555893 containerd[1444]: time="2024-12-13T01:33:17.555852045Z" level=info msg="StartContainer for \"2e979e17087764dfd54fee328caf75a4f5f12da6a496b0b22e5f6ae35a4fcc91\"" Dec 13 01:33:17.605996 systemd[1]: Started cri-containerd-2e979e17087764dfd54fee328caf75a4f5f12da6a496b0b22e5f6ae35a4fcc91.scope - libcontainer container 
2e979e17087764dfd54fee328caf75a4f5f12da6a496b0b22e5f6ae35a4fcc91. Dec 13 01:33:17.706683 containerd[1444]: time="2024-12-13T01:33:17.706555340Z" level=info msg="StartContainer for \"2e979e17087764dfd54fee328caf75a4f5f12da6a496b0b22e5f6ae35a4fcc91\" returns successfully" Dec 13 01:33:17.844119 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Dec 13 01:33:17.844290 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Dec 13 01:33:18.368958 kubelet[2557]: E1213 01:33:18.368919 2557 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:33:18.392220 kubelet[2557]: I1213 01:33:18.392178 2557 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-ghwq9" podStartSLOduration=2.090940438 podStartE2EDuration="13.392133003s" podCreationTimestamp="2024-12-13 01:33:05 +0000 UTC" firstStartedPulling="2024-12-13 01:33:06.233217214 +0000 UTC m=+25.108533444" lastFinishedPulling="2024-12-13 01:33:17.534409779 +0000 UTC m=+36.409726009" observedRunningTime="2024-12-13 01:33:18.382193557 +0000 UTC m=+37.257509787" watchObservedRunningTime="2024-12-13 01:33:18.392133003 +0000 UTC m=+37.267449233" Dec 13 01:33:18.408460 systemd[1]: Started sshd@8-10.0.0.74:22-10.0.0.1:34590.service - OpenSSH per-connection server daemon (10.0.0.1:34590). Dec 13 01:33:18.453342 sshd[3731]: Accepted publickey for core from 10.0.0.1 port 34590 ssh2: RSA SHA256:yVKhZEHbC7ylZ7bY3Y8pwdh1t/xp6Vz/y3yLFfd9j+Q Dec 13 01:33:18.454656 sshd[3731]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:33:18.458217 systemd-logind[1427]: New session 9 of user core. Dec 13 01:33:18.470982 systemd[1]: Started session-9.scope - Session 9 of User core. Dec 13 01:33:18.582428 sshd[3731]: pam_unix(sshd:session): session closed for user core Dec 13 01:33:18.586794 systemd[1]: sshd@8-10.0.0.74:22-10.0.0.1:34590.service: Deactivated successfully. Dec 13 01:33:18.588544 systemd[1]: session-9.scope: Deactivated successfully. Dec 13 01:33:18.589279 systemd-logind[1427]: Session 9 logged out. Waiting for processes to exit. Dec 13 01:33:18.590312 systemd-logind[1427]: Removed session 9. Dec 13 01:33:19.369701 kubelet[2557]: I1213 01:33:19.369671 2557 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 01:33:19.370450 kubelet[2557]: E1213 01:33:19.370429 2557 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:33:23.277202 kubelet[2557]: I1213 01:33:23.277132 2557 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 01:33:23.277872 kubelet[2557]: E1213 01:33:23.277730 2557 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:33:23.378746 kubelet[2557]: E1213 01:33:23.378712 2557 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:33:23.608067 systemd[1]: Started sshd@9-10.0.0.74:22-10.0.0.1:34340.service - OpenSSH per-connection server daemon (10.0.0.1:34340). 
Dec 13 01:33:23.647762 sshd[3964]: Accepted publickey for core from 10.0.0.1 port 34340 ssh2: RSA SHA256:yVKhZEHbC7ylZ7bY3Y8pwdh1t/xp6Vz/y3yLFfd9j+Q Dec 13 01:33:23.649351 sshd[3964]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:33:23.653459 systemd-logind[1427]: New session 10 of user core. Dec 13 01:33:23.666005 systemd[1]: Started session-10.scope - Session 10 of User core. Dec 13 01:33:23.831938 sshd[3964]: pam_unix(sshd:session): session closed for user core Dec 13 01:33:23.841806 systemd[1]: sshd@9-10.0.0.74:22-10.0.0.1:34340.service: Deactivated successfully. Dec 13 01:33:23.843465 systemd[1]: session-10.scope: Deactivated successfully. Dec 13 01:33:23.846172 systemd-logind[1427]: Session 10 logged out. Waiting for processes to exit. Dec 13 01:33:23.857775 systemd[1]: Started sshd@10-10.0.0.74:22-10.0.0.1:34350.service - OpenSSH per-connection server daemon (10.0.0.1:34350). Dec 13 01:33:23.859466 systemd-logind[1427]: Removed session 10. Dec 13 01:33:23.890496 sshd[3982]: Accepted publickey for core from 10.0.0.1 port 34350 ssh2: RSA SHA256:yVKhZEHbC7ylZ7bY3Y8pwdh1t/xp6Vz/y3yLFfd9j+Q Dec 13 01:33:23.891760 sshd[3982]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:33:23.895149 systemd-logind[1427]: New session 11 of user core. Dec 13 01:33:23.908961 systemd[1]: Started session-11.scope - Session 11 of User core. Dec 13 01:33:24.086919 sshd[3982]: pam_unix(sshd:session): session closed for user core Dec 13 01:33:24.096265 systemd[1]: sshd@10-10.0.0.74:22-10.0.0.1:34350.service: Deactivated successfully. Dec 13 01:33:24.098064 systemd[1]: session-11.scope: Deactivated successfully. Dec 13 01:33:24.099997 systemd-logind[1427]: Session 11 logged out. Waiting for processes to exit. Dec 13 01:33:24.118235 systemd[1]: Started sshd@11-10.0.0.74:22-10.0.0.1:34352.service - OpenSSH per-connection server daemon (10.0.0.1:34352). Dec 13 01:33:24.120272 systemd-logind[1427]: Removed session 11. Dec 13 01:33:24.164975 sshd[3994]: Accepted publickey for core from 10.0.0.1 port 34352 ssh2: RSA SHA256:yVKhZEHbC7ylZ7bY3Y8pwdh1t/xp6Vz/y3yLFfd9j+Q Dec 13 01:33:24.166482 sshd[3994]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:33:24.170415 systemd-logind[1427]: New session 12 of user core. Dec 13 01:33:24.188020 systemd[1]: Started session-12.scope - Session 12 of User core. Dec 13 01:33:24.305261 sshd[3994]: pam_unix(sshd:session): session closed for user core Dec 13 01:33:24.308839 systemd[1]: sshd@11-10.0.0.74:22-10.0.0.1:34352.service: Deactivated successfully. Dec 13 01:33:24.310673 systemd[1]: session-12.scope: Deactivated successfully. Dec 13 01:33:24.311321 systemd-logind[1427]: Session 12 logged out. Waiting for processes to exit. Dec 13 01:33:24.312356 systemd-logind[1427]: Removed session 12. 
Dec 13 01:33:24.441841 kernel: bpftool[4025]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Dec 13 01:33:24.609627 systemd-networkd[1389]: vxlan.calico: Link UP Dec 13 01:33:24.609636 systemd-networkd[1389]: vxlan.calico: Gained carrier Dec 13 01:33:25.223172 containerd[1444]: time="2024-12-13T01:33:25.223033922Z" level=info msg="StopPodSandbox for \"b7dcd408c4f41fa8a717f4c325d95908087b3b3a6410435fac0971513fc5b75a\"" Dec 13 01:33:25.396489 containerd[1444]: 2024-12-13 01:33:25.306 [INFO][4116] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b7dcd408c4f41fa8a717f4c325d95908087b3b3a6410435fac0971513fc5b75a" Dec 13 01:33:25.396489 containerd[1444]: 2024-12-13 01:33:25.307 [INFO][4116] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b7dcd408c4f41fa8a717f4c325d95908087b3b3a6410435fac0971513fc5b75a" iface="eth0" netns="/var/run/netns/cni-b9107d33-c4cf-3d4a-e0d6-619cb80787d6" Dec 13 01:33:25.396489 containerd[1444]: 2024-12-13 01:33:25.308 [INFO][4116] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b7dcd408c4f41fa8a717f4c325d95908087b3b3a6410435fac0971513fc5b75a" iface="eth0" netns="/var/run/netns/cni-b9107d33-c4cf-3d4a-e0d6-619cb80787d6" Dec 13 01:33:25.396489 containerd[1444]: 2024-12-13 01:33:25.309 [INFO][4116] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="b7dcd408c4f41fa8a717f4c325d95908087b3b3a6410435fac0971513fc5b75a" iface="eth0" netns="/var/run/netns/cni-b9107d33-c4cf-3d4a-e0d6-619cb80787d6" Dec 13 01:33:25.396489 containerd[1444]: 2024-12-13 01:33:25.309 [INFO][4116] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b7dcd408c4f41fa8a717f4c325d95908087b3b3a6410435fac0971513fc5b75a" Dec 13 01:33:25.396489 containerd[1444]: 2024-12-13 01:33:25.309 [INFO][4116] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b7dcd408c4f41fa8a717f4c325d95908087b3b3a6410435fac0971513fc5b75a" Dec 13 01:33:25.396489 containerd[1444]: 2024-12-13 01:33:25.381 [INFO][4124] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b7dcd408c4f41fa8a717f4c325d95908087b3b3a6410435fac0971513fc5b75a" HandleID="k8s-pod-network.b7dcd408c4f41fa8a717f4c325d95908087b3b3a6410435fac0971513fc5b75a" Workload="localhost-k8s-coredns--76f75df574--rwfq6-eth0" Dec 13 01:33:25.396489 containerd[1444]: 2024-12-13 01:33:25.382 [INFO][4124] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:33:25.396489 containerd[1444]: 2024-12-13 01:33:25.382 [INFO][4124] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:33:25.396489 containerd[1444]: 2024-12-13 01:33:25.391 [WARNING][4124] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="b7dcd408c4f41fa8a717f4c325d95908087b3b3a6410435fac0971513fc5b75a" HandleID="k8s-pod-network.b7dcd408c4f41fa8a717f4c325d95908087b3b3a6410435fac0971513fc5b75a" Workload="localhost-k8s-coredns--76f75df574--rwfq6-eth0" Dec 13 01:33:25.396489 containerd[1444]: 2024-12-13 01:33:25.391 [INFO][4124] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b7dcd408c4f41fa8a717f4c325d95908087b3b3a6410435fac0971513fc5b75a" HandleID="k8s-pod-network.b7dcd408c4f41fa8a717f4c325d95908087b3b3a6410435fac0971513fc5b75a" Workload="localhost-k8s-coredns--76f75df574--rwfq6-eth0" Dec 13 01:33:25.396489 containerd[1444]: 2024-12-13 01:33:25.392 [INFO][4124] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Dec 13 01:33:25.396489 containerd[1444]: 2024-12-13 01:33:25.394 [INFO][4116] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="b7dcd408c4f41fa8a717f4c325d95908087b3b3a6410435fac0971513fc5b75a" Dec 13 01:33:25.398499 containerd[1444]: time="2024-12-13T01:33:25.397939977Z" level=info msg="TearDown network for sandbox \"b7dcd408c4f41fa8a717f4c325d95908087b3b3a6410435fac0971513fc5b75a\" successfully" Dec 13 01:33:25.398499 containerd[1444]: time="2024-12-13T01:33:25.397989979Z" level=info msg="StopPodSandbox for \"b7dcd408c4f41fa8a717f4c325d95908087b3b3a6410435fac0971513fc5b75a\" returns successfully" Dec 13 01:33:25.398633 systemd[1]: run-netns-cni\x2db9107d33\x2dc4cf\x2d3d4a\x2de0d6\x2d619cb80787d6.mount: Deactivated successfully. Dec 13 01:33:25.400844 kubelet[2557]: E1213 01:33:25.400316 2557 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:33:25.401682 containerd[1444]: time="2024-12-13T01:33:25.401316707Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-rwfq6,Uid:fd5b119e-695f-466c-85b1-1df84ffeb4f8,Namespace:kube-system,Attempt:1,}" Dec 13 01:33:25.532609 systemd-networkd[1389]: calie11785907dc: Link UP Dec 13 01:33:25.533488 systemd-networkd[1389]: calie11785907dc: Gained carrier Dec 13 01:33:25.548099 containerd[1444]: 2024-12-13 01:33:25.462 [INFO][4139] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--76f75df574--rwfq6-eth0 coredns-76f75df574- kube-system fd5b119e-695f-466c-85b1-1df84ffeb4f8 920 0 2024-12-13 01:32:56 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-76f75df574-rwfq6 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calie11785907dc [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="5bb5eabd7bd0a4c289b7fdfe4ce239385a1788e67871e7ed2961c39e6046a448" Namespace="kube-system" Pod="coredns-76f75df574-rwfq6" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--rwfq6-" Dec 13 01:33:25.548099 containerd[1444]: 2024-12-13 01:33:25.462 [INFO][4139] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="5bb5eabd7bd0a4c289b7fdfe4ce239385a1788e67871e7ed2961c39e6046a448" Namespace="kube-system" Pod="coredns-76f75df574-rwfq6" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--rwfq6-eth0" Dec 13 01:33:25.548099 containerd[1444]: 2024-12-13 01:33:25.490 [INFO][4147] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5bb5eabd7bd0a4c289b7fdfe4ce239385a1788e67871e7ed2961c39e6046a448" HandleID="k8s-pod-network.5bb5eabd7bd0a4c289b7fdfe4ce239385a1788e67871e7ed2961c39e6046a448" Workload="localhost-k8s-coredns--76f75df574--rwfq6-eth0" Dec 13 01:33:25.548099 containerd[1444]: 2024-12-13 01:33:25.503 [INFO][4147] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5bb5eabd7bd0a4c289b7fdfe4ce239385a1788e67871e7ed2961c39e6046a448" HandleID="k8s-pod-network.5bb5eabd7bd0a4c289b7fdfe4ce239385a1788e67871e7ed2961c39e6046a448" Workload="localhost-k8s-coredns--76f75df574--rwfq6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002f2ec0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-76f75df574-rwfq6", "timestamp":"2024-12-13 
01:33:25.49019143 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 01:33:25.548099 containerd[1444]: 2024-12-13 01:33:25.503 [INFO][4147] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:33:25.548099 containerd[1444]: 2024-12-13 01:33:25.503 [INFO][4147] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:33:25.548099 containerd[1444]: 2024-12-13 01:33:25.503 [INFO][4147] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 13 01:33:25.548099 containerd[1444]: 2024-12-13 01:33:25.505 [INFO][4147] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.5bb5eabd7bd0a4c289b7fdfe4ce239385a1788e67871e7ed2961c39e6046a448" host="localhost" Dec 13 01:33:25.548099 containerd[1444]: 2024-12-13 01:33:25.510 [INFO][4147] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Dec 13 01:33:25.548099 containerd[1444]: 2024-12-13 01:33:25.514 [INFO][4147] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Dec 13 01:33:25.548099 containerd[1444]: 2024-12-13 01:33:25.516 [INFO][4147] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 13 01:33:25.548099 containerd[1444]: 2024-12-13 01:33:25.518 [INFO][4147] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 13 01:33:25.548099 containerd[1444]: 2024-12-13 01:33:25.518 [INFO][4147] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.5bb5eabd7bd0a4c289b7fdfe4ce239385a1788e67871e7ed2961c39e6046a448" host="localhost" Dec 13 01:33:25.548099 containerd[1444]: 2024-12-13 01:33:25.519 [INFO][4147] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.5bb5eabd7bd0a4c289b7fdfe4ce239385a1788e67871e7ed2961c39e6046a448 Dec 13 01:33:25.548099 containerd[1444]: 2024-12-13 01:33:25.523 [INFO][4147] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.5bb5eabd7bd0a4c289b7fdfe4ce239385a1788e67871e7ed2961c39e6046a448" host="localhost" Dec 13 01:33:25.548099 containerd[1444]: 2024-12-13 01:33:25.527 [INFO][4147] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.5bb5eabd7bd0a4c289b7fdfe4ce239385a1788e67871e7ed2961c39e6046a448" host="localhost" Dec 13 01:33:25.548099 containerd[1444]: 2024-12-13 01:33:25.527 [INFO][4147] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.5bb5eabd7bd0a4c289b7fdfe4ce239385a1788e67871e7ed2961c39e6046a448" host="localhost" Dec 13 01:33:25.548099 containerd[1444]: 2024-12-13 01:33:25.527 [INFO][4147] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Dec 13 01:33:25.548099 containerd[1444]: 2024-12-13 01:33:25.527 [INFO][4147] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="5bb5eabd7bd0a4c289b7fdfe4ce239385a1788e67871e7ed2961c39e6046a448" HandleID="k8s-pod-network.5bb5eabd7bd0a4c289b7fdfe4ce239385a1788e67871e7ed2961c39e6046a448" Workload="localhost-k8s-coredns--76f75df574--rwfq6-eth0" Dec 13 01:33:25.548741 containerd[1444]: 2024-12-13 01:33:25.530 [INFO][4139] cni-plugin/k8s.go 386: Populated endpoint ContainerID="5bb5eabd7bd0a4c289b7fdfe4ce239385a1788e67871e7ed2961c39e6046a448" Namespace="kube-system" Pod="coredns-76f75df574-rwfq6" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--rwfq6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--rwfq6-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"fd5b119e-695f-466c-85b1-1df84ffeb4f8", ResourceVersion:"920", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 32, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-76f75df574-rwfq6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie11785907dc", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:33:25.548741 containerd[1444]: 2024-12-13 01:33:25.530 [INFO][4139] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="5bb5eabd7bd0a4c289b7fdfe4ce239385a1788e67871e7ed2961c39e6046a448" Namespace="kube-system" Pod="coredns-76f75df574-rwfq6" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--rwfq6-eth0" Dec 13 01:33:25.548741 containerd[1444]: 2024-12-13 01:33:25.530 [INFO][4139] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie11785907dc ContainerID="5bb5eabd7bd0a4c289b7fdfe4ce239385a1788e67871e7ed2961c39e6046a448" Namespace="kube-system" Pod="coredns-76f75df574-rwfq6" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--rwfq6-eth0" Dec 13 01:33:25.548741 containerd[1444]: 2024-12-13 01:33:25.534 [INFO][4139] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5bb5eabd7bd0a4c289b7fdfe4ce239385a1788e67871e7ed2961c39e6046a448" Namespace="kube-system" Pod="coredns-76f75df574-rwfq6" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--rwfq6-eth0" Dec 13 01:33:25.548741 containerd[1444]: 2024-12-13 01:33:25.535 
[INFO][4139] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="5bb5eabd7bd0a4c289b7fdfe4ce239385a1788e67871e7ed2961c39e6046a448" Namespace="kube-system" Pod="coredns-76f75df574-rwfq6" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--rwfq6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--rwfq6-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"fd5b119e-695f-466c-85b1-1df84ffeb4f8", ResourceVersion:"920", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 32, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5bb5eabd7bd0a4c289b7fdfe4ce239385a1788e67871e7ed2961c39e6046a448", Pod:"coredns-76f75df574-rwfq6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie11785907dc", MAC:"4e:95:19:8c:62:bf", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:33:25.548741 containerd[1444]: 2024-12-13 01:33:25.545 [INFO][4139] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="5bb5eabd7bd0a4c289b7fdfe4ce239385a1788e67871e7ed2961c39e6046a448" Namespace="kube-system" Pod="coredns-76f75df574-rwfq6" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--rwfq6-eth0" Dec 13 01:33:25.567866 containerd[1444]: time="2024-12-13T01:33:25.567762634Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:33:25.568081 containerd[1444]: time="2024-12-13T01:33:25.567844518Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:33:25.568081 containerd[1444]: time="2024-12-13T01:33:25.567926081Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:33:25.568240 containerd[1444]: time="2024-12-13T01:33:25.568195811Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:33:25.594020 systemd[1]: Started cri-containerd-5bb5eabd7bd0a4c289b7fdfe4ce239385a1788e67871e7ed2961c39e6046a448.scope - libcontainer container 5bb5eabd7bd0a4c289b7fdfe4ce239385a1788e67871e7ed2961c39e6046a448. 
Dec 13 01:33:25.606399 systemd-resolved[1314]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 01:33:25.623875 containerd[1444]: time="2024-12-13T01:33:25.623836086Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-rwfq6,Uid:fd5b119e-695f-466c-85b1-1df84ffeb4f8,Namespace:kube-system,Attempt:1,} returns sandbox id \"5bb5eabd7bd0a4c289b7fdfe4ce239385a1788e67871e7ed2961c39e6046a448\"" Dec 13 01:33:25.624638 kubelet[2557]: E1213 01:33:25.624423 2557 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:33:25.628142 containerd[1444]: time="2024-12-13T01:33:25.628102451Z" level=info msg="CreateContainer within sandbox \"5bb5eabd7bd0a4c289b7fdfe4ce239385a1788e67871e7ed2961c39e6046a448\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 01:33:25.656492 containerd[1444]: time="2024-12-13T01:33:25.656338385Z" level=info msg="CreateContainer within sandbox \"5bb5eabd7bd0a4c289b7fdfe4ce239385a1788e67871e7ed2961c39e6046a448\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"484b5860b283388868cc111ae882e349052e0b56bc8bdbfd52433f5e47517020\"" Dec 13 01:33:25.657227 containerd[1444]: time="2024-12-13T01:33:25.657131376Z" level=info msg="StartContainer for \"484b5860b283388868cc111ae882e349052e0b56bc8bdbfd52433f5e47517020\"" Dec 13 01:33:25.688956 systemd[1]: Started cri-containerd-484b5860b283388868cc111ae882e349052e0b56bc8bdbfd52433f5e47517020.scope - libcontainer container 484b5860b283388868cc111ae882e349052e0b56bc8bdbfd52433f5e47517020. Dec 13 01:33:25.711139 containerd[1444]: time="2024-12-13T01:33:25.711086826Z" level=info msg="StartContainer for \"484b5860b283388868cc111ae882e349052e0b56bc8bdbfd52433f5e47517020\" returns successfully" Dec 13 01:33:25.989984 systemd-networkd[1389]: vxlan.calico: Gained IPv6LL Dec 13 01:33:26.222635 containerd[1444]: time="2024-12-13T01:33:26.222350959Z" level=info msg="StopPodSandbox for \"e2280bfa2e7a43cda63ef0a9569724a6aceeaba71b17d30b4ff629813c776b9e\"" Dec 13 01:33:26.298184 containerd[1444]: 2024-12-13 01:33:26.263 [INFO][4265] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="e2280bfa2e7a43cda63ef0a9569724a6aceeaba71b17d30b4ff629813c776b9e" Dec 13 01:33:26.298184 containerd[1444]: 2024-12-13 01:33:26.264 [INFO][4265] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="e2280bfa2e7a43cda63ef0a9569724a6aceeaba71b17d30b4ff629813c776b9e" iface="eth0" netns="/var/run/netns/cni-09d25766-0cbb-bc43-9634-fd5a0d2624bf" Dec 13 01:33:26.298184 containerd[1444]: 2024-12-13 01:33:26.264 [INFO][4265] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e2280bfa2e7a43cda63ef0a9569724a6aceeaba71b17d30b4ff629813c776b9e" iface="eth0" netns="/var/run/netns/cni-09d25766-0cbb-bc43-9634-fd5a0d2624bf" Dec 13 01:33:26.298184 containerd[1444]: 2024-12-13 01:33:26.264 [INFO][4265] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="e2280bfa2e7a43cda63ef0a9569724a6aceeaba71b17d30b4ff629813c776b9e" iface="eth0" netns="/var/run/netns/cni-09d25766-0cbb-bc43-9634-fd5a0d2624bf" Dec 13 01:33:26.298184 containerd[1444]: 2024-12-13 01:33:26.264 [INFO][4265] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e2280bfa2e7a43cda63ef0a9569724a6aceeaba71b17d30b4ff629813c776b9e" Dec 13 01:33:26.298184 containerd[1444]: 2024-12-13 01:33:26.264 [INFO][4265] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e2280bfa2e7a43cda63ef0a9569724a6aceeaba71b17d30b4ff629813c776b9e" Dec 13 01:33:26.298184 containerd[1444]: 2024-12-13 01:33:26.283 [INFO][4272] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e2280bfa2e7a43cda63ef0a9569724a6aceeaba71b17d30b4ff629813c776b9e" HandleID="k8s-pod-network.e2280bfa2e7a43cda63ef0a9569724a6aceeaba71b17d30b4ff629813c776b9e" Workload="localhost-k8s-calico--apiserver--d9556f564--hkcx8-eth0" Dec 13 01:33:26.298184 containerd[1444]: 2024-12-13 01:33:26.283 [INFO][4272] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:33:26.298184 containerd[1444]: 2024-12-13 01:33:26.284 [INFO][4272] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:33:26.298184 containerd[1444]: 2024-12-13 01:33:26.292 [WARNING][4272] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="e2280bfa2e7a43cda63ef0a9569724a6aceeaba71b17d30b4ff629813c776b9e" HandleID="k8s-pod-network.e2280bfa2e7a43cda63ef0a9569724a6aceeaba71b17d30b4ff629813c776b9e" Workload="localhost-k8s-calico--apiserver--d9556f564--hkcx8-eth0" Dec 13 01:33:26.298184 containerd[1444]: 2024-12-13 01:33:26.292 [INFO][4272] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e2280bfa2e7a43cda63ef0a9569724a6aceeaba71b17d30b4ff629813c776b9e" HandleID="k8s-pod-network.e2280bfa2e7a43cda63ef0a9569724a6aceeaba71b17d30b4ff629813c776b9e" Workload="localhost-k8s-calico--apiserver--d9556f564--hkcx8-eth0" Dec 13 01:33:26.298184 containerd[1444]: 2024-12-13 01:33:26.294 [INFO][4272] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:33:26.298184 containerd[1444]: 2024-12-13 01:33:26.296 [INFO][4265] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="e2280bfa2e7a43cda63ef0a9569724a6aceeaba71b17d30b4ff629813c776b9e" Dec 13 01:33:26.299317 containerd[1444]: time="2024-12-13T01:33:26.299002997Z" level=info msg="TearDown network for sandbox \"e2280bfa2e7a43cda63ef0a9569724a6aceeaba71b17d30b4ff629813c776b9e\" successfully" Dec 13 01:33:26.299317 containerd[1444]: time="2024-12-13T01:33:26.299043838Z" level=info msg="StopPodSandbox for \"e2280bfa2e7a43cda63ef0a9569724a6aceeaba71b17d30b4ff629813c776b9e\" returns successfully" Dec 13 01:33:26.299778 containerd[1444]: time="2024-12-13T01:33:26.299756785Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d9556f564-hkcx8,Uid:7417bd6c-d06e-4e11-95c2-509efea5ad02,Namespace:calico-apiserver,Attempt:1,}" Dec 13 01:33:26.384930 kubelet[2557]: E1213 01:33:26.384901 2557 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:33:26.400455 systemd[1]: run-netns-cni\x2d09d25766\x2d0cbb\x2dbc43\x2d9634\x2dfd5a0d2624bf.mount: Deactivated successfully. 
Dec 13 01:33:26.408849 kubelet[2557]: I1213 01:33:26.407315 2557 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-rwfq6" podStartSLOduration=30.407279438 podStartE2EDuration="30.407279438s" podCreationTimestamp="2024-12-13 01:32:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:33:26.407098311 +0000 UTC m=+45.282414541" watchObservedRunningTime="2024-12-13 01:33:26.407279438 +0000 UTC m=+45.282595668" Dec 13 01:33:26.438304 systemd-networkd[1389]: calid589052d8ce: Link UP Dec 13 01:33:26.439257 systemd-networkd[1389]: calid589052d8ce: Gained carrier Dec 13 01:33:26.449992 containerd[1444]: 2024-12-13 01:33:26.340 [INFO][4281] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--d9556f564--hkcx8-eth0 calico-apiserver-d9556f564- calico-apiserver 7417bd6c-d06e-4e11-95c2-509efea5ad02 933 0 2024-12-13 01:33:05 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:d9556f564 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-d9556f564-hkcx8 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calid589052d8ce [] []}} ContainerID="a671281be0e59e581afb049553e2ba77c0e27d1225343b14d025e15ca9614d8f" Namespace="calico-apiserver" Pod="calico-apiserver-d9556f564-hkcx8" WorkloadEndpoint="localhost-k8s-calico--apiserver--d9556f564--hkcx8-" Dec 13 01:33:26.449992 containerd[1444]: 2024-12-13 01:33:26.340 [INFO][4281] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="a671281be0e59e581afb049553e2ba77c0e27d1225343b14d025e15ca9614d8f" Namespace="calico-apiserver" Pod="calico-apiserver-d9556f564-hkcx8" WorkloadEndpoint="localhost-k8s-calico--apiserver--d9556f564--hkcx8-eth0" Dec 13 01:33:26.449992 containerd[1444]: 2024-12-13 01:33:26.368 [INFO][4296] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a671281be0e59e581afb049553e2ba77c0e27d1225343b14d025e15ca9614d8f" HandleID="k8s-pod-network.a671281be0e59e581afb049553e2ba77c0e27d1225343b14d025e15ca9614d8f" Workload="localhost-k8s-calico--apiserver--d9556f564--hkcx8-eth0" Dec 13 01:33:26.449992 containerd[1444]: 2024-12-13 01:33:26.379 [INFO][4296] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a671281be0e59e581afb049553e2ba77c0e27d1225343b14d025e15ca9614d8f" HandleID="k8s-pod-network.a671281be0e59e581afb049553e2ba77c0e27d1225343b14d025e15ca9614d8f" Workload="localhost-k8s-calico--apiserver--d9556f564--hkcx8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d84b0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-d9556f564-hkcx8", "timestamp":"2024-12-13 01:33:26.368524403 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 01:33:26.449992 containerd[1444]: 2024-12-13 01:33:26.379 [INFO][4296] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Dec 13 01:33:26.449992 containerd[1444]: 2024-12-13 01:33:26.379 [INFO][4296] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:33:26.449992 containerd[1444]: 2024-12-13 01:33:26.379 [INFO][4296] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 13 01:33:26.449992 containerd[1444]: 2024-12-13 01:33:26.381 [INFO][4296] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.a671281be0e59e581afb049553e2ba77c0e27d1225343b14d025e15ca9614d8f" host="localhost" Dec 13 01:33:26.449992 containerd[1444]: 2024-12-13 01:33:26.387 [INFO][4296] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Dec 13 01:33:26.449992 containerd[1444]: 2024-12-13 01:33:26.409 [INFO][4296] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Dec 13 01:33:26.449992 containerd[1444]: 2024-12-13 01:33:26.412 [INFO][4296] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 13 01:33:26.449992 containerd[1444]: 2024-12-13 01:33:26.415 [INFO][4296] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 13 01:33:26.449992 containerd[1444]: 2024-12-13 01:33:26.415 [INFO][4296] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a671281be0e59e581afb049553e2ba77c0e27d1225343b14d025e15ca9614d8f" host="localhost" Dec 13 01:33:26.449992 containerd[1444]: 2024-12-13 01:33:26.416 [INFO][4296] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.a671281be0e59e581afb049553e2ba77c0e27d1225343b14d025e15ca9614d8f Dec 13 01:33:26.449992 containerd[1444]: 2024-12-13 01:33:26.419 [INFO][4296] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a671281be0e59e581afb049553e2ba77c0e27d1225343b14d025e15ca9614d8f" host="localhost" Dec 13 01:33:26.449992 containerd[1444]: 2024-12-13 01:33:26.431 [INFO][4296] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.a671281be0e59e581afb049553e2ba77c0e27d1225343b14d025e15ca9614d8f" host="localhost" Dec 13 01:33:26.449992 containerd[1444]: 2024-12-13 01:33:26.431 [INFO][4296] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.a671281be0e59e581afb049553e2ba77c0e27d1225343b14d025e15ca9614d8f" host="localhost" Dec 13 01:33:26.449992 containerd[1444]: 2024-12-13 01:33:26.431 [INFO][4296] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Dec 13 01:33:26.449992 containerd[1444]: 2024-12-13 01:33:26.431 [INFO][4296] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="a671281be0e59e581afb049553e2ba77c0e27d1225343b14d025e15ca9614d8f" HandleID="k8s-pod-network.a671281be0e59e581afb049553e2ba77c0e27d1225343b14d025e15ca9614d8f" Workload="localhost-k8s-calico--apiserver--d9556f564--hkcx8-eth0" Dec 13 01:33:26.450961 containerd[1444]: 2024-12-13 01:33:26.434 [INFO][4281] cni-plugin/k8s.go 386: Populated endpoint ContainerID="a671281be0e59e581afb049553e2ba77c0e27d1225343b14d025e15ca9614d8f" Namespace="calico-apiserver" Pod="calico-apiserver-d9556f564-hkcx8" WorkloadEndpoint="localhost-k8s-calico--apiserver--d9556f564--hkcx8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--d9556f564--hkcx8-eth0", GenerateName:"calico-apiserver-d9556f564-", Namespace:"calico-apiserver", SelfLink:"", UID:"7417bd6c-d06e-4e11-95c2-509efea5ad02", ResourceVersion:"933", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 33, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"d9556f564", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-d9556f564-hkcx8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid589052d8ce", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:33:26.450961 containerd[1444]: 2024-12-13 01:33:26.435 [INFO][4281] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="a671281be0e59e581afb049553e2ba77c0e27d1225343b14d025e15ca9614d8f" Namespace="calico-apiserver" Pod="calico-apiserver-d9556f564-hkcx8" WorkloadEndpoint="localhost-k8s-calico--apiserver--d9556f564--hkcx8-eth0" Dec 13 01:33:26.450961 containerd[1444]: 2024-12-13 01:33:26.435 [INFO][4281] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid589052d8ce ContainerID="a671281be0e59e581afb049553e2ba77c0e27d1225343b14d025e15ca9614d8f" Namespace="calico-apiserver" Pod="calico-apiserver-d9556f564-hkcx8" WorkloadEndpoint="localhost-k8s-calico--apiserver--d9556f564--hkcx8-eth0" Dec 13 01:33:26.450961 containerd[1444]: 2024-12-13 01:33:26.437 [INFO][4281] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a671281be0e59e581afb049553e2ba77c0e27d1225343b14d025e15ca9614d8f" Namespace="calico-apiserver" Pod="calico-apiserver-d9556f564-hkcx8" WorkloadEndpoint="localhost-k8s-calico--apiserver--d9556f564--hkcx8-eth0" Dec 13 01:33:26.450961 containerd[1444]: 2024-12-13 01:33:26.437 [INFO][4281] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="a671281be0e59e581afb049553e2ba77c0e27d1225343b14d025e15ca9614d8f" 
Namespace="calico-apiserver" Pod="calico-apiserver-d9556f564-hkcx8" WorkloadEndpoint="localhost-k8s-calico--apiserver--d9556f564--hkcx8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--d9556f564--hkcx8-eth0", GenerateName:"calico-apiserver-d9556f564-", Namespace:"calico-apiserver", SelfLink:"", UID:"7417bd6c-d06e-4e11-95c2-509efea5ad02", ResourceVersion:"933", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 33, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"d9556f564", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a671281be0e59e581afb049553e2ba77c0e27d1225343b14d025e15ca9614d8f", Pod:"calico-apiserver-d9556f564-hkcx8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid589052d8ce", MAC:"ba:eb:45:27:b6:3e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:33:26.450961 containerd[1444]: 2024-12-13 01:33:26.446 [INFO][4281] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="a671281be0e59e581afb049553e2ba77c0e27d1225343b14d025e15ca9614d8f" Namespace="calico-apiserver" Pod="calico-apiserver-d9556f564-hkcx8" WorkloadEndpoint="localhost-k8s-calico--apiserver--d9556f564--hkcx8-eth0" Dec 13 01:33:26.474065 containerd[1444]: time="2024-12-13T01:33:26.473949495Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:33:26.474284 containerd[1444]: time="2024-12-13T01:33:26.474043019Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:33:26.474284 containerd[1444]: time="2024-12-13T01:33:26.474056659Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:33:26.474284 containerd[1444]: time="2024-12-13T01:33:26.474143462Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:33:26.499014 systemd[1]: Started cri-containerd-a671281be0e59e581afb049553e2ba77c0e27d1225343b14d025e15ca9614d8f.scope - libcontainer container a671281be0e59e581afb049553e2ba77c0e27d1225343b14d025e15ca9614d8f. 
Dec 13 01:33:26.509561 systemd-resolved[1314]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 01:33:26.531162 containerd[1444]: time="2024-12-13T01:33:26.531119031Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d9556f564-hkcx8,Uid:7417bd6c-d06e-4e11-95c2-509efea5ad02,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"a671281be0e59e581afb049553e2ba77c0e27d1225343b14d025e15ca9614d8f\"" Dec 13 01:33:26.532616 containerd[1444]: time="2024-12-13T01:33:26.532587087Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Dec 13 01:33:26.693985 systemd-networkd[1389]: calie11785907dc: Gained IPv6LL Dec 13 01:33:27.222193 containerd[1444]: time="2024-12-13T01:33:27.222134751Z" level=info msg="StopPodSandbox for \"4955b02e1457118e8633dd6c8a5802b1901118578fd5a31d7a20b8f04460eb0b\"" Dec 13 01:33:27.300720 containerd[1444]: 2024-12-13 01:33:27.264 [INFO][4383] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="4955b02e1457118e8633dd6c8a5802b1901118578fd5a31d7a20b8f04460eb0b" Dec 13 01:33:27.300720 containerd[1444]: 2024-12-13 01:33:27.264 [INFO][4383] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="4955b02e1457118e8633dd6c8a5802b1901118578fd5a31d7a20b8f04460eb0b" iface="eth0" netns="/var/run/netns/cni-d6dd600a-fa86-0781-dfe1-fe21ae8bbf18" Dec 13 01:33:27.300720 containerd[1444]: 2024-12-13 01:33:27.264 [INFO][4383] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="4955b02e1457118e8633dd6c8a5802b1901118578fd5a31d7a20b8f04460eb0b" iface="eth0" netns="/var/run/netns/cni-d6dd600a-fa86-0781-dfe1-fe21ae8bbf18" Dec 13 01:33:27.300720 containerd[1444]: 2024-12-13 01:33:27.264 [INFO][4383] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="4955b02e1457118e8633dd6c8a5802b1901118578fd5a31d7a20b8f04460eb0b" iface="eth0" netns="/var/run/netns/cni-d6dd600a-fa86-0781-dfe1-fe21ae8bbf18" Dec 13 01:33:27.300720 containerd[1444]: 2024-12-13 01:33:27.264 [INFO][4383] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="4955b02e1457118e8633dd6c8a5802b1901118578fd5a31d7a20b8f04460eb0b" Dec 13 01:33:27.300720 containerd[1444]: 2024-12-13 01:33:27.264 [INFO][4383] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4955b02e1457118e8633dd6c8a5802b1901118578fd5a31d7a20b8f04460eb0b" Dec 13 01:33:27.300720 containerd[1444]: 2024-12-13 01:33:27.284 [INFO][4390] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4955b02e1457118e8633dd6c8a5802b1901118578fd5a31d7a20b8f04460eb0b" HandleID="k8s-pod-network.4955b02e1457118e8633dd6c8a5802b1901118578fd5a31d7a20b8f04460eb0b" Workload="localhost-k8s-coredns--76f75df574--f98xf-eth0" Dec 13 01:33:27.300720 containerd[1444]: 2024-12-13 01:33:27.284 [INFO][4390] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:33:27.300720 containerd[1444]: 2024-12-13 01:33:27.284 [INFO][4390] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:33:27.300720 containerd[1444]: 2024-12-13 01:33:27.292 [WARNING][4390] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4955b02e1457118e8633dd6c8a5802b1901118578fd5a31d7a20b8f04460eb0b" HandleID="k8s-pod-network.4955b02e1457118e8633dd6c8a5802b1901118578fd5a31d7a20b8f04460eb0b" Workload="localhost-k8s-coredns--76f75df574--f98xf-eth0" Dec 13 01:33:27.300720 containerd[1444]: 2024-12-13 01:33:27.293 [INFO][4390] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4955b02e1457118e8633dd6c8a5802b1901118578fd5a31d7a20b8f04460eb0b" HandleID="k8s-pod-network.4955b02e1457118e8633dd6c8a5802b1901118578fd5a31d7a20b8f04460eb0b" Workload="localhost-k8s-coredns--76f75df574--f98xf-eth0" Dec 13 01:33:27.300720 containerd[1444]: 2024-12-13 01:33:27.294 [INFO][4390] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:33:27.300720 containerd[1444]: 2024-12-13 01:33:27.298 [INFO][4383] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="4955b02e1457118e8633dd6c8a5802b1901118578fd5a31d7a20b8f04460eb0b" Dec 13 01:33:27.301333 containerd[1444]: time="2024-12-13T01:33:27.301033544Z" level=info msg="TearDown network for sandbox \"4955b02e1457118e8633dd6c8a5802b1901118578fd5a31d7a20b8f04460eb0b\" successfully" Dec 13 01:33:27.301333 containerd[1444]: time="2024-12-13T01:33:27.301062265Z" level=info msg="StopPodSandbox for \"4955b02e1457118e8633dd6c8a5802b1901118578fd5a31d7a20b8f04460eb0b\" returns successfully" Dec 13 01:33:27.302633 kubelet[2557]: E1213 01:33:27.302589 2557 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:33:27.302899 systemd[1]: run-netns-cni\x2dd6dd600a\x2dfa86\x2d0781\x2ddfe1\x2dfe21ae8bbf18.mount: Deactivated successfully. Dec 13 01:33:27.303599 containerd[1444]: time="2024-12-13T01:33:27.303324390Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-f98xf,Uid:14a279e5-ffed-4fd4-a4aa-0fc271831c85,Namespace:kube-system,Attempt:1,}" Dec 13 01:33:27.389150 kubelet[2557]: E1213 01:33:27.389059 2557 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:33:27.423931 systemd-networkd[1389]: cali07585eb5b25: Link UP Dec 13 01:33:27.424136 systemd-networkd[1389]: cali07585eb5b25: Gained carrier Dec 13 01:33:27.440180 containerd[1444]: 2024-12-13 01:33:27.349 [INFO][4397] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--76f75df574--f98xf-eth0 coredns-76f75df574- kube-system 14a279e5-ffed-4fd4-a4aa-0fc271831c85 950 0 2024-12-13 01:32:56 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-76f75df574-f98xf eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali07585eb5b25 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="3578ed51142b70f6a83233fc1c01baac8c3d58fcc1ff81822ab122fa6534d01e" Namespace="kube-system" Pod="coredns-76f75df574-f98xf" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--f98xf-" Dec 13 01:33:27.440180 containerd[1444]: 2024-12-13 01:33:27.349 [INFO][4397] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="3578ed51142b70f6a83233fc1c01baac8c3d58fcc1ff81822ab122fa6534d01e" Namespace="kube-system" Pod="coredns-76f75df574-f98xf" 
WorkloadEndpoint="localhost-k8s-coredns--76f75df574--f98xf-eth0" Dec 13 01:33:27.440180 containerd[1444]: 2024-12-13 01:33:27.376 [INFO][4411] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3578ed51142b70f6a83233fc1c01baac8c3d58fcc1ff81822ab122fa6534d01e" HandleID="k8s-pod-network.3578ed51142b70f6a83233fc1c01baac8c3d58fcc1ff81822ab122fa6534d01e" Workload="localhost-k8s-coredns--76f75df574--f98xf-eth0" Dec 13 01:33:27.440180 containerd[1444]: 2024-12-13 01:33:27.387 [INFO][4411] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3578ed51142b70f6a83233fc1c01baac8c3d58fcc1ff81822ab122fa6534d01e" HandleID="k8s-pod-network.3578ed51142b70f6a83233fc1c01baac8c3d58fcc1ff81822ab122fa6534d01e" Workload="localhost-k8s-coredns--76f75df574--f98xf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400031eae0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-76f75df574-f98xf", "timestamp":"2024-12-13 01:33:27.376349643 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 01:33:27.440180 containerd[1444]: 2024-12-13 01:33:27.388 [INFO][4411] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:33:27.440180 containerd[1444]: 2024-12-13 01:33:27.388 [INFO][4411] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:33:27.440180 containerd[1444]: 2024-12-13 01:33:27.388 [INFO][4411] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 13 01:33:27.440180 containerd[1444]: 2024-12-13 01:33:27.390 [INFO][4411] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.3578ed51142b70f6a83233fc1c01baac8c3d58fcc1ff81822ab122fa6534d01e" host="localhost" Dec 13 01:33:27.440180 containerd[1444]: 2024-12-13 01:33:27.394 [INFO][4411] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Dec 13 01:33:27.440180 containerd[1444]: 2024-12-13 01:33:27.401 [INFO][4411] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Dec 13 01:33:27.440180 containerd[1444]: 2024-12-13 01:33:27.403 [INFO][4411] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 13 01:33:27.440180 containerd[1444]: 2024-12-13 01:33:27.406 [INFO][4411] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 13 01:33:27.440180 containerd[1444]: 2024-12-13 01:33:27.406 [INFO][4411] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.3578ed51142b70f6a83233fc1c01baac8c3d58fcc1ff81822ab122fa6534d01e" host="localhost" Dec 13 01:33:27.440180 containerd[1444]: 2024-12-13 01:33:27.408 [INFO][4411] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.3578ed51142b70f6a83233fc1c01baac8c3d58fcc1ff81822ab122fa6534d01e Dec 13 01:33:27.440180 containerd[1444]: 2024-12-13 01:33:27.413 [INFO][4411] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.3578ed51142b70f6a83233fc1c01baac8c3d58fcc1ff81822ab122fa6534d01e" host="localhost" Dec 13 01:33:27.440180 containerd[1444]: 2024-12-13 01:33:27.418 [INFO][4411] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 
handle="k8s-pod-network.3578ed51142b70f6a83233fc1c01baac8c3d58fcc1ff81822ab122fa6534d01e" host="localhost" Dec 13 01:33:27.440180 containerd[1444]: 2024-12-13 01:33:27.418 [INFO][4411] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.3578ed51142b70f6a83233fc1c01baac8c3d58fcc1ff81822ab122fa6534d01e" host="localhost" Dec 13 01:33:27.440180 containerd[1444]: 2024-12-13 01:33:27.418 [INFO][4411] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:33:27.440180 containerd[1444]: 2024-12-13 01:33:27.419 [INFO][4411] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="3578ed51142b70f6a83233fc1c01baac8c3d58fcc1ff81822ab122fa6534d01e" HandleID="k8s-pod-network.3578ed51142b70f6a83233fc1c01baac8c3d58fcc1ff81822ab122fa6534d01e" Workload="localhost-k8s-coredns--76f75df574--f98xf-eth0" Dec 13 01:33:27.440700 containerd[1444]: 2024-12-13 01:33:27.420 [INFO][4397] cni-plugin/k8s.go 386: Populated endpoint ContainerID="3578ed51142b70f6a83233fc1c01baac8c3d58fcc1ff81822ab122fa6534d01e" Namespace="kube-system" Pod="coredns-76f75df574-f98xf" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--f98xf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--f98xf-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"14a279e5-ffed-4fd4-a4aa-0fc271831c85", ResourceVersion:"950", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 32, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-76f75df574-f98xf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali07585eb5b25", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:33:27.440700 containerd[1444]: 2024-12-13 01:33:27.421 [INFO][4397] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="3578ed51142b70f6a83233fc1c01baac8c3d58fcc1ff81822ab122fa6534d01e" Namespace="kube-system" Pod="coredns-76f75df574-f98xf" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--f98xf-eth0" Dec 13 01:33:27.440700 containerd[1444]: 2024-12-13 01:33:27.421 [INFO][4397] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali07585eb5b25 ContainerID="3578ed51142b70f6a83233fc1c01baac8c3d58fcc1ff81822ab122fa6534d01e" Namespace="kube-system" 
Pod="coredns-76f75df574-f98xf" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--f98xf-eth0" Dec 13 01:33:27.440700 containerd[1444]: 2024-12-13 01:33:27.422 [INFO][4397] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3578ed51142b70f6a83233fc1c01baac8c3d58fcc1ff81822ab122fa6534d01e" Namespace="kube-system" Pod="coredns-76f75df574-f98xf" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--f98xf-eth0" Dec 13 01:33:27.440700 containerd[1444]: 2024-12-13 01:33:27.422 [INFO][4397] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="3578ed51142b70f6a83233fc1c01baac8c3d58fcc1ff81822ab122fa6534d01e" Namespace="kube-system" Pod="coredns-76f75df574-f98xf" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--f98xf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--f98xf-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"14a279e5-ffed-4fd4-a4aa-0fc271831c85", ResourceVersion:"950", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 32, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3578ed51142b70f6a83233fc1c01baac8c3d58fcc1ff81822ab122fa6534d01e", Pod:"coredns-76f75df574-f98xf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali07585eb5b25", MAC:"ae:6f:3b:74:bb:32", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:33:27.440700 containerd[1444]: 2024-12-13 01:33:27.438 [INFO][4397] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="3578ed51142b70f6a83233fc1c01baac8c3d58fcc1ff81822ab122fa6534d01e" Namespace="kube-system" Pod="coredns-76f75df574-f98xf" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--f98xf-eth0" Dec 13 01:33:27.459723 containerd[1444]: time="2024-12-13T01:33:27.459457474Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:33:27.459723 containerd[1444]: time="2024-12-13T01:33:27.459503275Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:33:27.459723 containerd[1444]: time="2024-12-13T01:33:27.459514436Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:33:27.459723 containerd[1444]: time="2024-12-13T01:33:27.459602799Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:33:27.486974 systemd[1]: Started cri-containerd-3578ed51142b70f6a83233fc1c01baac8c3d58fcc1ff81822ab122fa6534d01e.scope - libcontainer container 3578ed51142b70f6a83233fc1c01baac8c3d58fcc1ff81822ab122fa6534d01e. Dec 13 01:33:27.498943 systemd-resolved[1314]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 01:33:27.521894 containerd[1444]: time="2024-12-13T01:33:27.521856929Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-f98xf,Uid:14a279e5-ffed-4fd4-a4aa-0fc271831c85,Namespace:kube-system,Attempt:1,} returns sandbox id \"3578ed51142b70f6a83233fc1c01baac8c3d58fcc1ff81822ab122fa6534d01e\"" Dec 13 01:33:27.522735 kubelet[2557]: E1213 01:33:27.522715 2557 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:33:27.525928 containerd[1444]: time="2024-12-13T01:33:27.525869359Z" level=info msg="CreateContainer within sandbox \"3578ed51142b70f6a83233fc1c01baac8c3d58fcc1ff81822ab122fa6534d01e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 01:33:27.545047 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1876325734.mount: Deactivated successfully. Dec 13 01:33:27.549197 containerd[1444]: time="2024-12-13T01:33:27.549152391Z" level=info msg="CreateContainer within sandbox \"3578ed51142b70f6a83233fc1c01baac8c3d58fcc1ff81822ab122fa6534d01e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f50504048ef29ce73f0d9110f5cc23a23f752d2f97d8e63ff3ba1ef9af3b22e6\"" Dec 13 01:33:27.552124 containerd[1444]: time="2024-12-13T01:33:27.552086181Z" level=info msg="StartContainer for \"f50504048ef29ce73f0d9110f5cc23a23f752d2f97d8e63ff3ba1ef9af3b22e6\"" Dec 13 01:33:27.580967 systemd[1]: Started cri-containerd-f50504048ef29ce73f0d9110f5cc23a23f752d2f97d8e63ff3ba1ef9af3b22e6.scope - libcontainer container f50504048ef29ce73f0d9110f5cc23a23f752d2f97d8e63ff3ba1ef9af3b22e6. 
Dec 13 01:33:27.646722 containerd[1444]: time="2024-12-13T01:33:27.646678321Z" level=info msg="StartContainer for \"f50504048ef29ce73f0d9110f5cc23a23f752d2f97d8e63ff3ba1ef9af3b22e6\" returns successfully" Dec 13 01:33:28.040850 containerd[1444]: time="2024-12-13T01:33:28.040546919Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:33:28.041559 containerd[1444]: time="2024-12-13T01:33:28.041517595Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=39298409" Dec 13 01:33:28.043442 containerd[1444]: time="2024-12-13T01:33:28.043288980Z" level=info msg="ImageCreate event name:\"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:33:28.046317 containerd[1444]: time="2024-12-13T01:33:28.046226089Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:33:28.047217 containerd[1444]: time="2024-12-13T01:33:28.047069600Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"40668079\" in 1.514446191s" Dec 13 01:33:28.047217 containerd[1444]: time="2024-12-13T01:33:28.047117121Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\"" Dec 13 01:33:28.048798 containerd[1444]: time="2024-12-13T01:33:28.048769342Z" level=info msg="CreateContainer within sandbox \"a671281be0e59e581afb049553e2ba77c0e27d1225343b14d025e15ca9614d8f\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Dec 13 01:33:28.060257 containerd[1444]: time="2024-12-13T01:33:28.060165962Z" level=info msg="CreateContainer within sandbox \"a671281be0e59e581afb049553e2ba77c0e27d1225343b14d025e15ca9614d8f\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"88913c7a7f9d1508e9f470e28feaa46f998f7f58aaa586a64491a30cb78024ca\"" Dec 13 01:33:28.061048 containerd[1444]: time="2024-12-13T01:33:28.060893869Z" level=info msg="StartContainer for \"88913c7a7f9d1508e9f470e28feaa46f998f7f58aaa586a64491a30cb78024ca\"" Dec 13 01:33:28.091991 systemd[1]: Started cri-containerd-88913c7a7f9d1508e9f470e28feaa46f998f7f58aaa586a64491a30cb78024ca.scope - libcontainer container 88913c7a7f9d1508e9f470e28feaa46f998f7f58aaa586a64491a30cb78024ca. 
Dec 13 01:33:28.123115 containerd[1444]: time="2024-12-13T01:33:28.123077279Z" level=info msg="StartContainer for \"88913c7a7f9d1508e9f470e28feaa46f998f7f58aaa586a64491a30cb78024ca\" returns successfully" Dec 13 01:33:28.166442 systemd-networkd[1389]: calid589052d8ce: Gained IPv6LL Dec 13 01:33:28.394018 kubelet[2557]: E1213 01:33:28.393920 2557 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:33:28.420186 kubelet[2557]: I1213 01:33:28.420142 2557 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-d9556f564-hkcx8" podStartSLOduration=21.905018525 podStartE2EDuration="23.420101581s" podCreationTimestamp="2024-12-13 01:33:05 +0000 UTC" firstStartedPulling="2024-12-13 01:33:26.532257794 +0000 UTC m=+45.407574024" lastFinishedPulling="2024-12-13 01:33:28.04734085 +0000 UTC m=+46.922657080" observedRunningTime="2024-12-13 01:33:28.407224027 +0000 UTC m=+47.282540257" watchObservedRunningTime="2024-12-13 01:33:28.420101581 +0000 UTC m=+47.295417811" Dec 13 01:33:28.613953 systemd-networkd[1389]: cali07585eb5b25: Gained IPv6LL Dec 13 01:33:29.225266 containerd[1444]: time="2024-12-13T01:33:29.225104590Z" level=info msg="StopPodSandbox for \"b47cc38580f7fa65992edabae074323e10bfd53c72e9a8151438a1a43bba314f\"" Dec 13 01:33:29.225972 containerd[1444]: time="2024-12-13T01:33:29.225223515Z" level=info msg="StopPodSandbox for \"27c1980e72fd94912649c06eb672cf2928715d4d8c5fd0a59db4d051dd893f60\"" Dec 13 01:33:29.227103 containerd[1444]: time="2024-12-13T01:33:29.225225515Z" level=info msg="StopPodSandbox for \"b121e35746b1350f74d90263185605db9f427569301109de2a624ea65061acc0\"" Dec 13 01:33:29.298761 kubelet[2557]: I1213 01:33:29.298678 2557 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-f98xf" podStartSLOduration=33.298621898 podStartE2EDuration="33.298621898s" podCreationTimestamp="2024-12-13 01:32:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:33:28.422967886 +0000 UTC m=+47.298284116" watchObservedRunningTime="2024-12-13 01:33:29.298621898 +0000 UTC m=+48.173938128" Dec 13 01:33:29.330261 systemd[1]: Started sshd@12-10.0.0.74:22-10.0.0.1:34356.service - OpenSSH per-connection server daemon (10.0.0.1:34356). Dec 13 01:33:29.380183 containerd[1444]: 2024-12-13 01:33:29.297 [INFO][4618] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b47cc38580f7fa65992edabae074323e10bfd53c72e9a8151438a1a43bba314f" Dec 13 01:33:29.380183 containerd[1444]: 2024-12-13 01:33:29.298 [INFO][4618] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b47cc38580f7fa65992edabae074323e10bfd53c72e9a8151438a1a43bba314f" iface="eth0" netns="/var/run/netns/cni-cca374d1-13b4-f0ab-5d57-bfc281a5cb77" Dec 13 01:33:29.380183 containerd[1444]: 2024-12-13 01:33:29.299 [INFO][4618] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b47cc38580f7fa65992edabae074323e10bfd53c72e9a8151438a1a43bba314f" iface="eth0" netns="/var/run/netns/cni-cca374d1-13b4-f0ab-5d57-bfc281a5cb77" Dec 13 01:33:29.380183 containerd[1444]: 2024-12-13 01:33:29.300 [INFO][4618] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="b47cc38580f7fa65992edabae074323e10bfd53c72e9a8151438a1a43bba314f" iface="eth0" netns="/var/run/netns/cni-cca374d1-13b4-f0ab-5d57-bfc281a5cb77" Dec 13 01:33:29.380183 containerd[1444]: 2024-12-13 01:33:29.301 [INFO][4618] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b47cc38580f7fa65992edabae074323e10bfd53c72e9a8151438a1a43bba314f" Dec 13 01:33:29.380183 containerd[1444]: 2024-12-13 01:33:29.301 [INFO][4618] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b47cc38580f7fa65992edabae074323e10bfd53c72e9a8151438a1a43bba314f" Dec 13 01:33:29.380183 containerd[1444]: 2024-12-13 01:33:29.339 [INFO][4633] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b47cc38580f7fa65992edabae074323e10bfd53c72e9a8151438a1a43bba314f" HandleID="k8s-pod-network.b47cc38580f7fa65992edabae074323e10bfd53c72e9a8151438a1a43bba314f" Workload="localhost-k8s-calico--apiserver--d9556f564--qscm5-eth0" Dec 13 01:33:29.380183 containerd[1444]: 2024-12-13 01:33:29.339 [INFO][4633] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:33:29.380183 containerd[1444]: 2024-12-13 01:33:29.339 [INFO][4633] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:33:29.380183 containerd[1444]: 2024-12-13 01:33:29.360 [WARNING][4633] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="b47cc38580f7fa65992edabae074323e10bfd53c72e9a8151438a1a43bba314f" HandleID="k8s-pod-network.b47cc38580f7fa65992edabae074323e10bfd53c72e9a8151438a1a43bba314f" Workload="localhost-k8s-calico--apiserver--d9556f564--qscm5-eth0" Dec 13 01:33:29.380183 containerd[1444]: 2024-12-13 01:33:29.360 [INFO][4633] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b47cc38580f7fa65992edabae074323e10bfd53c72e9a8151438a1a43bba314f" HandleID="k8s-pod-network.b47cc38580f7fa65992edabae074323e10bfd53c72e9a8151438a1a43bba314f" Workload="localhost-k8s-calico--apiserver--d9556f564--qscm5-eth0" Dec 13 01:33:29.380183 containerd[1444]: 2024-12-13 01:33:29.363 [INFO][4633] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:33:29.380183 containerd[1444]: 2024-12-13 01:33:29.374 [INFO][4618] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="b47cc38580f7fa65992edabae074323e10bfd53c72e9a8151438a1a43bba314f" Dec 13 01:33:29.382956 systemd[1]: run-netns-cni\x2dcca374d1\x2d13b4\x2df0ab\x2d5d57\x2dbfc281a5cb77.mount: Deactivated successfully. Dec 13 01:33:29.383912 containerd[1444]: 2024-12-13 01:33:29.321 [INFO][4608] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b121e35746b1350f74d90263185605db9f427569301109de2a624ea65061acc0" Dec 13 01:33:29.383912 containerd[1444]: 2024-12-13 01:33:29.323 [INFO][4608] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b121e35746b1350f74d90263185605db9f427569301109de2a624ea65061acc0" iface="eth0" netns="/var/run/netns/cni-2e8b3bb9-d431-7f1b-bec1-915ac2732cfd" Dec 13 01:33:29.383912 containerd[1444]: 2024-12-13 01:33:29.323 [INFO][4608] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b121e35746b1350f74d90263185605db9f427569301109de2a624ea65061acc0" iface="eth0" netns="/var/run/netns/cni-2e8b3bb9-d431-7f1b-bec1-915ac2732cfd" Dec 13 01:33:29.383912 containerd[1444]: 2024-12-13 01:33:29.324 [INFO][4608] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="b121e35746b1350f74d90263185605db9f427569301109de2a624ea65061acc0" iface="eth0" netns="/var/run/netns/cni-2e8b3bb9-d431-7f1b-bec1-915ac2732cfd" Dec 13 01:33:29.383912 containerd[1444]: 2024-12-13 01:33:29.324 [INFO][4608] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b121e35746b1350f74d90263185605db9f427569301109de2a624ea65061acc0" Dec 13 01:33:29.383912 containerd[1444]: 2024-12-13 01:33:29.324 [INFO][4608] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b121e35746b1350f74d90263185605db9f427569301109de2a624ea65061acc0" Dec 13 01:33:29.383912 containerd[1444]: 2024-12-13 01:33:29.356 [INFO][4646] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b121e35746b1350f74d90263185605db9f427569301109de2a624ea65061acc0" HandleID="k8s-pod-network.b121e35746b1350f74d90263185605db9f427569301109de2a624ea65061acc0" Workload="localhost-k8s-csi--node--driver--f78qq-eth0" Dec 13 01:33:29.383912 containerd[1444]: 2024-12-13 01:33:29.356 [INFO][4646] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:33:29.383912 containerd[1444]: 2024-12-13 01:33:29.363 [INFO][4646] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:33:29.383912 containerd[1444]: 2024-12-13 01:33:29.376 [WARNING][4646] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="b121e35746b1350f74d90263185605db9f427569301109de2a624ea65061acc0" HandleID="k8s-pod-network.b121e35746b1350f74d90263185605db9f427569301109de2a624ea65061acc0" Workload="localhost-k8s-csi--node--driver--f78qq-eth0" Dec 13 01:33:29.383912 containerd[1444]: 2024-12-13 01:33:29.376 [INFO][4646] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b121e35746b1350f74d90263185605db9f427569301109de2a624ea65061acc0" HandleID="k8s-pod-network.b121e35746b1350f74d90263185605db9f427569301109de2a624ea65061acc0" Workload="localhost-k8s-csi--node--driver--f78qq-eth0" Dec 13 01:33:29.383912 containerd[1444]: 2024-12-13 01:33:29.378 [INFO][4646] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:33:29.383912 containerd[1444]: 2024-12-13 01:33:29.380 [INFO][4608] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="b121e35746b1350f74d90263185605db9f427569301109de2a624ea65061acc0" Dec 13 01:33:29.387171 sshd[4645]: Accepted publickey for core from 10.0.0.1 port 34356 ssh2: RSA SHA256:yVKhZEHbC7ylZ7bY3Y8pwdh1t/xp6Vz/y3yLFfd9j+Q Dec 13 01:33:29.388685 containerd[1444]: time="2024-12-13T01:33:29.383883511Z" level=info msg="TearDown network for sandbox \"b47cc38580f7fa65992edabae074323e10bfd53c72e9a8151438a1a43bba314f\" successfully" Dec 13 01:33:29.388685 containerd[1444]: time="2024-12-13T01:33:29.387111348Z" level=info msg="StopPodSandbox for \"b47cc38580f7fa65992edabae074323e10bfd53c72e9a8151438a1a43bba314f\" returns successfully" Dec 13 01:33:29.389593 systemd[1]: run-netns-cni\x2d2e8b3bb9\x2dd431\x2d7f1b\x2dbec1\x2d915ac2732cfd.mount: Deactivated successfully. 
Dec 13 01:33:29.390671 containerd[1444]: time="2024-12-13T01:33:29.390458790Z" level=info msg="TearDown network for sandbox \"b121e35746b1350f74d90263185605db9f427569301109de2a624ea65061acc0\" successfully" Dec 13 01:33:29.390671 containerd[1444]: time="2024-12-13T01:33:29.390487511Z" level=info msg="StopPodSandbox for \"b121e35746b1350f74d90263185605db9f427569301109de2a624ea65061acc0\" returns successfully" Dec 13 01:33:29.390698 sshd[4645]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:33:29.391086 containerd[1444]: time="2024-12-13T01:33:29.391044691Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d9556f564-qscm5,Uid:98996d25-3276-48cb-98f2-b0369f62d55a,Namespace:calico-apiserver,Attempt:1,}" Dec 13 01:33:29.392040 containerd[1444]: time="2024-12-13T01:33:29.391740676Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-f78qq,Uid:616ab489-e9c3-404d-8315-b87df53098e2,Namespace:calico-system,Attempt:1,}" Dec 13 01:33:29.396651 systemd-logind[1427]: New session 13 of user core. Dec 13 01:33:29.398706 kubelet[2557]: E1213 01:33:29.398678 2557 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:33:29.403350 systemd[1]: Started session-13.scope - Session 13 of User core. Dec 13 01:33:29.412265 containerd[1444]: 2024-12-13 01:33:29.296 [INFO][4604] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="27c1980e72fd94912649c06eb672cf2928715d4d8c5fd0a59db4d051dd893f60" Dec 13 01:33:29.412265 containerd[1444]: 2024-12-13 01:33:29.296 [INFO][4604] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="27c1980e72fd94912649c06eb672cf2928715d4d8c5fd0a59db4d051dd893f60" iface="eth0" netns="/var/run/netns/cni-4adba7ba-d5bc-9761-658f-fe644a81ebe9" Dec 13 01:33:29.412265 containerd[1444]: 2024-12-13 01:33:29.297 [INFO][4604] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="27c1980e72fd94912649c06eb672cf2928715d4d8c5fd0a59db4d051dd893f60" iface="eth0" netns="/var/run/netns/cni-4adba7ba-d5bc-9761-658f-fe644a81ebe9" Dec 13 01:33:29.412265 containerd[1444]: 2024-12-13 01:33:29.297 [INFO][4604] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="27c1980e72fd94912649c06eb672cf2928715d4d8c5fd0a59db4d051dd893f60" iface="eth0" netns="/var/run/netns/cni-4adba7ba-d5bc-9761-658f-fe644a81ebe9" Dec 13 01:33:29.412265 containerd[1444]: 2024-12-13 01:33:29.297 [INFO][4604] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="27c1980e72fd94912649c06eb672cf2928715d4d8c5fd0a59db4d051dd893f60" Dec 13 01:33:29.412265 containerd[1444]: 2024-12-13 01:33:29.297 [INFO][4604] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="27c1980e72fd94912649c06eb672cf2928715d4d8c5fd0a59db4d051dd893f60" Dec 13 01:33:29.412265 containerd[1444]: 2024-12-13 01:33:29.382 [INFO][4632] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="27c1980e72fd94912649c06eb672cf2928715d4d8c5fd0a59db4d051dd893f60" HandleID="k8s-pod-network.27c1980e72fd94912649c06eb672cf2928715d4d8c5fd0a59db4d051dd893f60" Workload="localhost-k8s-calico--kube--controllers--7fd5945fd6--dcfsq-eth0" Dec 13 01:33:29.412265 containerd[1444]: 2024-12-13 01:33:29.382 [INFO][4632] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Dec 13 01:33:29.412265 containerd[1444]: 2024-12-13 01:33:29.382 [INFO][4632] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:33:29.412265 containerd[1444]: 2024-12-13 01:33:29.396 [WARNING][4632] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="27c1980e72fd94912649c06eb672cf2928715d4d8c5fd0a59db4d051dd893f60" HandleID="k8s-pod-network.27c1980e72fd94912649c06eb672cf2928715d4d8c5fd0a59db4d051dd893f60" Workload="localhost-k8s-calico--kube--controllers--7fd5945fd6--dcfsq-eth0" Dec 13 01:33:29.412265 containerd[1444]: 2024-12-13 01:33:29.396 [INFO][4632] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="27c1980e72fd94912649c06eb672cf2928715d4d8c5fd0a59db4d051dd893f60" HandleID="k8s-pod-network.27c1980e72fd94912649c06eb672cf2928715d4d8c5fd0a59db4d051dd893f60" Workload="localhost-k8s-calico--kube--controllers--7fd5945fd6--dcfsq-eth0" Dec 13 01:33:29.412265 containerd[1444]: 2024-12-13 01:33:29.399 [INFO][4632] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:33:29.412265 containerd[1444]: 2024-12-13 01:33:29.409 [INFO][4604] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="27c1980e72fd94912649c06eb672cf2928715d4d8c5fd0a59db4d051dd893f60" Dec 13 01:33:29.412885 containerd[1444]: time="2024-12-13T01:33:29.412693636Z" level=info msg="TearDown network for sandbox \"27c1980e72fd94912649c06eb672cf2928715d4d8c5fd0a59db4d051dd893f60\" successfully" Dec 13 01:33:29.412885 containerd[1444]: time="2024-12-13T01:33:29.412725757Z" level=info msg="StopPodSandbox for \"27c1980e72fd94912649c06eb672cf2928715d4d8c5fd0a59db4d051dd893f60\" returns successfully" Dec 13 01:33:29.414318 systemd[1]: run-netns-cni\x2d4adba7ba\x2dd5bc\x2d9761\x2d658f\x2dfe644a81ebe9.mount: Deactivated successfully. 
Dec 13 01:33:29.417851 containerd[1444]: time="2024-12-13T01:33:29.416565977Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7fd5945fd6-dcfsq,Uid:4c7b6da3-bd72-4f9f-b426-ad3b46174127,Namespace:calico-system,Attempt:1,}" Dec 13 01:33:29.675133 systemd-networkd[1389]: cali18dbb59cb59: Link UP Dec 13 01:33:29.675315 systemd-networkd[1389]: cali18dbb59cb59: Gained carrier Dec 13 01:33:29.697056 containerd[1444]: 2024-12-13 01:33:29.557 [INFO][4668] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--f78qq-eth0 csi-node-driver- calico-system 616ab489-e9c3-404d-8315-b87df53098e2 986 0 2024-12-13 01:33:05 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:55b695c467 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-f78qq eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali18dbb59cb59 [] []}} ContainerID="a04e59c13ca649df7edd398ac23b34935dd00176b2afb5aab9f02e2b724cbcb1" Namespace="calico-system" Pod="csi-node-driver-f78qq" WorkloadEndpoint="localhost-k8s-csi--node--driver--f78qq-" Dec 13 01:33:29.697056 containerd[1444]: 2024-12-13 01:33:29.558 [INFO][4668] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="a04e59c13ca649df7edd398ac23b34935dd00176b2afb5aab9f02e2b724cbcb1" Namespace="calico-system" Pod="csi-node-driver-f78qq" WorkloadEndpoint="localhost-k8s-csi--node--driver--f78qq-eth0" Dec 13 01:33:29.697056 containerd[1444]: 2024-12-13 01:33:29.605 [INFO][4717] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a04e59c13ca649df7edd398ac23b34935dd00176b2afb5aab9f02e2b724cbcb1" HandleID="k8s-pod-network.a04e59c13ca649df7edd398ac23b34935dd00176b2afb5aab9f02e2b724cbcb1" Workload="localhost-k8s-csi--node--driver--f78qq-eth0" Dec 13 01:33:29.697056 containerd[1444]: 2024-12-13 01:33:29.623 [INFO][4717] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a04e59c13ca649df7edd398ac23b34935dd00176b2afb5aab9f02e2b724cbcb1" HandleID="k8s-pod-network.a04e59c13ca649df7edd398ac23b34935dd00176b2afb5aab9f02e2b724cbcb1" Workload="localhost-k8s-csi--node--driver--f78qq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002db860), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-f78qq", "timestamp":"2024-12-13 01:33:29.602797254 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 01:33:29.697056 containerd[1444]: 2024-12-13 01:33:29.623 [INFO][4717] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:33:29.697056 containerd[1444]: 2024-12-13 01:33:29.623 [INFO][4717] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 01:33:29.697056 containerd[1444]: 2024-12-13 01:33:29.623 [INFO][4717] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 13 01:33:29.697056 containerd[1444]: 2024-12-13 01:33:29.626 [INFO][4717] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.a04e59c13ca649df7edd398ac23b34935dd00176b2afb5aab9f02e2b724cbcb1" host="localhost" Dec 13 01:33:29.697056 containerd[1444]: 2024-12-13 01:33:29.631 [INFO][4717] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Dec 13 01:33:29.697056 containerd[1444]: 2024-12-13 01:33:29.641 [INFO][4717] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Dec 13 01:33:29.697056 containerd[1444]: 2024-12-13 01:33:29.645 [INFO][4717] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 13 01:33:29.697056 containerd[1444]: 2024-12-13 01:33:29.649 [INFO][4717] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 13 01:33:29.697056 containerd[1444]: 2024-12-13 01:33:29.649 [INFO][4717] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a04e59c13ca649df7edd398ac23b34935dd00176b2afb5aab9f02e2b724cbcb1" host="localhost" Dec 13 01:33:29.697056 containerd[1444]: 2024-12-13 01:33:29.651 [INFO][4717] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.a04e59c13ca649df7edd398ac23b34935dd00176b2afb5aab9f02e2b724cbcb1 Dec 13 01:33:29.697056 containerd[1444]: 2024-12-13 01:33:29.657 [INFO][4717] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a04e59c13ca649df7edd398ac23b34935dd00176b2afb5aab9f02e2b724cbcb1" host="localhost" Dec 13 01:33:29.697056 containerd[1444]: 2024-12-13 01:33:29.668 [INFO][4717] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.a04e59c13ca649df7edd398ac23b34935dd00176b2afb5aab9f02e2b724cbcb1" host="localhost" Dec 13 01:33:29.697056 containerd[1444]: 2024-12-13 01:33:29.668 [INFO][4717] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.a04e59c13ca649df7edd398ac23b34935dd00176b2afb5aab9f02e2b724cbcb1" host="localhost" Dec 13 01:33:29.697056 containerd[1444]: 2024-12-13 01:33:29.668 [INFO][4717] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Dec 13 01:33:29.697056 containerd[1444]: 2024-12-13 01:33:29.668 [INFO][4717] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="a04e59c13ca649df7edd398ac23b34935dd00176b2afb5aab9f02e2b724cbcb1" HandleID="k8s-pod-network.a04e59c13ca649df7edd398ac23b34935dd00176b2afb5aab9f02e2b724cbcb1" Workload="localhost-k8s-csi--node--driver--f78qq-eth0" Dec 13 01:33:29.698000 containerd[1444]: 2024-12-13 01:33:29.672 [INFO][4668] cni-plugin/k8s.go 386: Populated endpoint ContainerID="a04e59c13ca649df7edd398ac23b34935dd00176b2afb5aab9f02e2b724cbcb1" Namespace="calico-system" Pod="csi-node-driver-f78qq" WorkloadEndpoint="localhost-k8s-csi--node--driver--f78qq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--f78qq-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"616ab489-e9c3-404d-8315-b87df53098e2", ResourceVersion:"986", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 33, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-f78qq", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali18dbb59cb59", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:33:29.698000 containerd[1444]: 2024-12-13 01:33:29.672 [INFO][4668] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="a04e59c13ca649df7edd398ac23b34935dd00176b2afb5aab9f02e2b724cbcb1" Namespace="calico-system" Pod="csi-node-driver-f78qq" WorkloadEndpoint="localhost-k8s-csi--node--driver--f78qq-eth0" Dec 13 01:33:29.698000 containerd[1444]: 2024-12-13 01:33:29.672 [INFO][4668] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali18dbb59cb59 ContainerID="a04e59c13ca649df7edd398ac23b34935dd00176b2afb5aab9f02e2b724cbcb1" Namespace="calico-system" Pod="csi-node-driver-f78qq" WorkloadEndpoint="localhost-k8s-csi--node--driver--f78qq-eth0" Dec 13 01:33:29.698000 containerd[1444]: 2024-12-13 01:33:29.674 [INFO][4668] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a04e59c13ca649df7edd398ac23b34935dd00176b2afb5aab9f02e2b724cbcb1" Namespace="calico-system" Pod="csi-node-driver-f78qq" WorkloadEndpoint="localhost-k8s-csi--node--driver--f78qq-eth0" Dec 13 01:33:29.698000 containerd[1444]: 2024-12-13 01:33:29.675 [INFO][4668] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="a04e59c13ca649df7edd398ac23b34935dd00176b2afb5aab9f02e2b724cbcb1" Namespace="calico-system" Pod="csi-node-driver-f78qq" WorkloadEndpoint="localhost-k8s-csi--node--driver--f78qq-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--f78qq-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"616ab489-e9c3-404d-8315-b87df53098e2", ResourceVersion:"986", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 33, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a04e59c13ca649df7edd398ac23b34935dd00176b2afb5aab9f02e2b724cbcb1", Pod:"csi-node-driver-f78qq", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali18dbb59cb59", MAC:"b6:5a:2c:0a:2a:47", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:33:29.698000 containerd[1444]: 2024-12-13 01:33:29.692 [INFO][4668] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="a04e59c13ca649df7edd398ac23b34935dd00176b2afb5aab9f02e2b724cbcb1" Namespace="calico-system" Pod="csi-node-driver-f78qq" WorkloadEndpoint="localhost-k8s-csi--node--driver--f78qq-eth0" Dec 13 01:33:29.712842 sshd[4645]: pam_unix(sshd:session): session closed for user core Dec 13 01:33:29.723792 systemd[1]: sshd@12-10.0.0.74:22-10.0.0.1:34356.service: Deactivated successfully. Dec 13 01:33:29.727266 systemd[1]: session-13.scope: Deactivated successfully. Dec 13 01:33:29.728138 containerd[1444]: time="2024-12-13T01:33:29.728060838Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:33:29.728305 containerd[1444]: time="2024-12-13T01:33:29.728276406Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:33:29.728567 containerd[1444]: time="2024-12-13T01:33:29.728534536Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:33:29.728770 containerd[1444]: time="2024-12-13T01:33:29.728738943Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:33:29.730414 systemd-logind[1427]: Session 13 logged out. Waiting for processes to exit. Dec 13 01:33:29.740093 systemd[1]: Started sshd@13-10.0.0.74:22-10.0.0.1:34364.service - OpenSSH per-connection server daemon (10.0.0.1:34364). Dec 13 01:33:29.741775 systemd-logind[1427]: Removed session 13. Dec 13 01:33:29.747853 systemd[1]: Started cri-containerd-a04e59c13ca649df7edd398ac23b34935dd00176b2afb5aab9f02e2b724cbcb1.scope - libcontainer container a04e59c13ca649df7edd398ac23b34935dd00176b2afb5aab9f02e2b724cbcb1. 
Dec 13 01:33:29.758648 systemd-resolved[1314]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 01:33:29.766964 containerd[1444]: time="2024-12-13T01:33:29.766929169Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-f78qq,Uid:616ab489-e9c3-404d-8315-b87df53098e2,Namespace:calico-system,Attempt:1,} returns sandbox id \"a04e59c13ca649df7edd398ac23b34935dd00176b2afb5aab9f02e2b724cbcb1\"" Dec 13 01:33:29.768564 containerd[1444]: time="2024-12-13T01:33:29.768375181Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Dec 13 01:33:29.773728 sshd[4771]: Accepted publickey for core from 10.0.0.1 port 34364 ssh2: RSA SHA256:yVKhZEHbC7ylZ7bY3Y8pwdh1t/xp6Vz/y3yLFfd9j+Q Dec 13 01:33:29.775047 sshd[4771]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:33:29.779010 systemd-logind[1427]: New session 14 of user core. Dec 13 01:33:29.790059 systemd[1]: Started session-14.scope - Session 14 of User core. Dec 13 01:33:29.846167 systemd-networkd[1389]: cali4084c6038d7: Link UP Dec 13 01:33:29.847031 systemd-networkd[1389]: cali4084c6038d7: Gained carrier Dec 13 01:33:29.862785 containerd[1444]: 2024-12-13 01:33:29.555 [INFO][4693] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--d9556f564--qscm5-eth0 calico-apiserver-d9556f564- calico-apiserver 98996d25-3276-48cb-98f2-b0369f62d55a 985 0 2024-12-13 01:33:05 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:d9556f564 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-d9556f564-qscm5 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali4084c6038d7 [] []}} ContainerID="87fb3e1251a8c6ac832587ceb49493501446fe16e1b6c5eabf1259f7a84a3e3b" Namespace="calico-apiserver" Pod="calico-apiserver-d9556f564-qscm5" WorkloadEndpoint="localhost-k8s-calico--apiserver--d9556f564--qscm5-" Dec 13 01:33:29.862785 containerd[1444]: 2024-12-13 01:33:29.555 [INFO][4693] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="87fb3e1251a8c6ac832587ceb49493501446fe16e1b6c5eabf1259f7a84a3e3b" Namespace="calico-apiserver" Pod="calico-apiserver-d9556f564-qscm5" WorkloadEndpoint="localhost-k8s-calico--apiserver--d9556f564--qscm5-eth0" Dec 13 01:33:29.862785 containerd[1444]: 2024-12-13 01:33:29.620 [INFO][4712] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="87fb3e1251a8c6ac832587ceb49493501446fe16e1b6c5eabf1259f7a84a3e3b" HandleID="k8s-pod-network.87fb3e1251a8c6ac832587ceb49493501446fe16e1b6c5eabf1259f7a84a3e3b" Workload="localhost-k8s-calico--apiserver--d9556f564--qscm5-eth0" Dec 13 01:33:29.862785 containerd[1444]: 2024-12-13 01:33:29.641 [INFO][4712] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="87fb3e1251a8c6ac832587ceb49493501446fe16e1b6c5eabf1259f7a84a3e3b" HandleID="k8s-pod-network.87fb3e1251a8c6ac832587ceb49493501446fe16e1b6c5eabf1259f7a84a3e3b" Workload="localhost-k8s-calico--apiserver--d9556f564--qscm5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d9d40), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-d9556f564-qscm5", "timestamp":"2024-12-13 01:33:29.620628021 +0000 UTC"}, 
Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 01:33:29.862785 containerd[1444]: 2024-12-13 01:33:29.641 [INFO][4712] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:33:29.862785 containerd[1444]: 2024-12-13 01:33:29.668 [INFO][4712] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:33:29.862785 containerd[1444]: 2024-12-13 01:33:29.668 [INFO][4712] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 13 01:33:29.862785 containerd[1444]: 2024-12-13 01:33:29.670 [INFO][4712] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.87fb3e1251a8c6ac832587ceb49493501446fe16e1b6c5eabf1259f7a84a3e3b" host="localhost" Dec 13 01:33:29.862785 containerd[1444]: 2024-12-13 01:33:29.680 [INFO][4712] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Dec 13 01:33:29.862785 containerd[1444]: 2024-12-13 01:33:29.689 [INFO][4712] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Dec 13 01:33:29.862785 containerd[1444]: 2024-12-13 01:33:29.694 [INFO][4712] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 13 01:33:29.862785 containerd[1444]: 2024-12-13 01:33:29.697 [INFO][4712] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 13 01:33:29.862785 containerd[1444]: 2024-12-13 01:33:29.697 [INFO][4712] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.87fb3e1251a8c6ac832587ceb49493501446fe16e1b6c5eabf1259f7a84a3e3b" host="localhost" Dec 13 01:33:29.862785 containerd[1444]: 2024-12-13 01:33:29.699 [INFO][4712] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.87fb3e1251a8c6ac832587ceb49493501446fe16e1b6c5eabf1259f7a84a3e3b Dec 13 01:33:29.862785 containerd[1444]: 2024-12-13 01:33:29.711 [INFO][4712] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.87fb3e1251a8c6ac832587ceb49493501446fe16e1b6c5eabf1259f7a84a3e3b" host="localhost" Dec 13 01:33:29.862785 containerd[1444]: 2024-12-13 01:33:29.835 [INFO][4712] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.87fb3e1251a8c6ac832587ceb49493501446fe16e1b6c5eabf1259f7a84a3e3b" host="localhost" Dec 13 01:33:29.862785 containerd[1444]: 2024-12-13 01:33:29.836 [INFO][4712] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.87fb3e1251a8c6ac832587ceb49493501446fe16e1b6c5eabf1259f7a84a3e3b" host="localhost" Dec 13 01:33:29.862785 containerd[1444]: 2024-12-13 01:33:29.836 [INFO][4712] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Dec 13 01:33:29.862785 containerd[1444]: 2024-12-13 01:33:29.836 [INFO][4712] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="87fb3e1251a8c6ac832587ceb49493501446fe16e1b6c5eabf1259f7a84a3e3b" HandleID="k8s-pod-network.87fb3e1251a8c6ac832587ceb49493501446fe16e1b6c5eabf1259f7a84a3e3b" Workload="localhost-k8s-calico--apiserver--d9556f564--qscm5-eth0" Dec 13 01:33:29.863625 containerd[1444]: 2024-12-13 01:33:29.839 [INFO][4693] cni-plugin/k8s.go 386: Populated endpoint ContainerID="87fb3e1251a8c6ac832587ceb49493501446fe16e1b6c5eabf1259f7a84a3e3b" Namespace="calico-apiserver" Pod="calico-apiserver-d9556f564-qscm5" WorkloadEndpoint="localhost-k8s-calico--apiserver--d9556f564--qscm5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--d9556f564--qscm5-eth0", GenerateName:"calico-apiserver-d9556f564-", Namespace:"calico-apiserver", SelfLink:"", UID:"98996d25-3276-48cb-98f2-b0369f62d55a", ResourceVersion:"985", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 33, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"d9556f564", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-d9556f564-qscm5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4084c6038d7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:33:29.863625 containerd[1444]: 2024-12-13 01:33:29.840 [INFO][4693] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="87fb3e1251a8c6ac832587ceb49493501446fe16e1b6c5eabf1259f7a84a3e3b" Namespace="calico-apiserver" Pod="calico-apiserver-d9556f564-qscm5" WorkloadEndpoint="localhost-k8s-calico--apiserver--d9556f564--qscm5-eth0" Dec 13 01:33:29.863625 containerd[1444]: 2024-12-13 01:33:29.840 [INFO][4693] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4084c6038d7 ContainerID="87fb3e1251a8c6ac832587ceb49493501446fe16e1b6c5eabf1259f7a84a3e3b" Namespace="calico-apiserver" Pod="calico-apiserver-d9556f564-qscm5" WorkloadEndpoint="localhost-k8s-calico--apiserver--d9556f564--qscm5-eth0" Dec 13 01:33:29.863625 containerd[1444]: 2024-12-13 01:33:29.846 [INFO][4693] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="87fb3e1251a8c6ac832587ceb49493501446fe16e1b6c5eabf1259f7a84a3e3b" Namespace="calico-apiserver" Pod="calico-apiserver-d9556f564-qscm5" WorkloadEndpoint="localhost-k8s-calico--apiserver--d9556f564--qscm5-eth0" Dec 13 01:33:29.863625 containerd[1444]: 2024-12-13 01:33:29.847 [INFO][4693] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="87fb3e1251a8c6ac832587ceb49493501446fe16e1b6c5eabf1259f7a84a3e3b" 
Namespace="calico-apiserver" Pod="calico-apiserver-d9556f564-qscm5" WorkloadEndpoint="localhost-k8s-calico--apiserver--d9556f564--qscm5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--d9556f564--qscm5-eth0", GenerateName:"calico-apiserver-d9556f564-", Namespace:"calico-apiserver", SelfLink:"", UID:"98996d25-3276-48cb-98f2-b0369f62d55a", ResourceVersion:"985", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 33, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"d9556f564", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"87fb3e1251a8c6ac832587ceb49493501446fe16e1b6c5eabf1259f7a84a3e3b", Pod:"calico-apiserver-d9556f564-qscm5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4084c6038d7", MAC:"2e:c3:ea:f0:dc:63", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:33:29.863625 containerd[1444]: 2024-12-13 01:33:29.856 [INFO][4693] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="87fb3e1251a8c6ac832587ceb49493501446fe16e1b6c5eabf1259f7a84a3e3b" Namespace="calico-apiserver" Pod="calico-apiserver-d9556f564-qscm5" WorkloadEndpoint="localhost-k8s-calico--apiserver--d9556f564--qscm5-eth0" Dec 13 01:33:29.895063 systemd-networkd[1389]: cali096546dc758: Link UP Dec 13 01:33:29.895923 systemd-networkd[1389]: cali096546dc758: Gained carrier Dec 13 01:33:29.899311 containerd[1444]: time="2024-12-13T01:33:29.899051402Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:33:29.899311 containerd[1444]: time="2024-12-13T01:33:29.899106764Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:33:29.899311 containerd[1444]: time="2024-12-13T01:33:29.899118445Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:33:29.899311 containerd[1444]: time="2024-12-13T01:33:29.899188887Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:33:29.909911 containerd[1444]: 2024-12-13 01:33:29.555 [INFO][4679] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--7fd5945fd6--dcfsq-eth0 calico-kube-controllers-7fd5945fd6- calico-system 4c7b6da3-bd72-4f9f-b426-ad3b46174127 984 0 2024-12-13 01:33:05 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7fd5945fd6 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-7fd5945fd6-dcfsq eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali096546dc758 [] []}} ContainerID="998d95aa04cdca984413c4793e2213b7c75502866078cf74a3ff29afdccc8688" Namespace="calico-system" Pod="calico-kube-controllers-7fd5945fd6-dcfsq" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7fd5945fd6--dcfsq-" Dec 13 01:33:29.909911 containerd[1444]: 2024-12-13 01:33:29.555 [INFO][4679] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="998d95aa04cdca984413c4793e2213b7c75502866078cf74a3ff29afdccc8688" Namespace="calico-system" Pod="calico-kube-controllers-7fd5945fd6-dcfsq" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7fd5945fd6--dcfsq-eth0" Dec 13 01:33:29.909911 containerd[1444]: 2024-12-13 01:33:29.632 [INFO][4725] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="998d95aa04cdca984413c4793e2213b7c75502866078cf74a3ff29afdccc8688" HandleID="k8s-pod-network.998d95aa04cdca984413c4793e2213b7c75502866078cf74a3ff29afdccc8688" Workload="localhost-k8s-calico--kube--controllers--7fd5945fd6--dcfsq-eth0" Dec 13 01:33:29.909911 containerd[1444]: 2024-12-13 01:33:29.647 [INFO][4725] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="998d95aa04cdca984413c4793e2213b7c75502866078cf74a3ff29afdccc8688" HandleID="k8s-pod-network.998d95aa04cdca984413c4793e2213b7c75502866078cf74a3ff29afdccc8688" Workload="localhost-k8s-calico--kube--controllers--7fd5945fd6--dcfsq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000416360), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-7fd5945fd6-dcfsq", "timestamp":"2024-12-13 01:33:29.632707819 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 01:33:29.909911 containerd[1444]: 2024-12-13 01:33:29.647 [INFO][4725] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:33:29.909911 containerd[1444]: 2024-12-13 01:33:29.836 [INFO][4725] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 01:33:29.909911 containerd[1444]: 2024-12-13 01:33:29.836 [INFO][4725] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 13 01:33:29.909911 containerd[1444]: 2024-12-13 01:33:29.839 [INFO][4725] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.998d95aa04cdca984413c4793e2213b7c75502866078cf74a3ff29afdccc8688" host="localhost" Dec 13 01:33:29.909911 containerd[1444]: 2024-12-13 01:33:29.849 [INFO][4725] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Dec 13 01:33:29.909911 containerd[1444]: 2024-12-13 01:33:29.861 [INFO][4725] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Dec 13 01:33:29.909911 containerd[1444]: 2024-12-13 01:33:29.863 [INFO][4725] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 13 01:33:29.909911 containerd[1444]: 2024-12-13 01:33:29.868 [INFO][4725] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 13 01:33:29.909911 containerd[1444]: 2024-12-13 01:33:29.868 [INFO][4725] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.998d95aa04cdca984413c4793e2213b7c75502866078cf74a3ff29afdccc8688" host="localhost" Dec 13 01:33:29.909911 containerd[1444]: 2024-12-13 01:33:29.869 [INFO][4725] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.998d95aa04cdca984413c4793e2213b7c75502866078cf74a3ff29afdccc8688 Dec 13 01:33:29.909911 containerd[1444]: 2024-12-13 01:33:29.877 [INFO][4725] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.998d95aa04cdca984413c4793e2213b7c75502866078cf74a3ff29afdccc8688" host="localhost" Dec 13 01:33:29.909911 containerd[1444]: 2024-12-13 01:33:29.884 [INFO][4725] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.998d95aa04cdca984413c4793e2213b7c75502866078cf74a3ff29afdccc8688" host="localhost" Dec 13 01:33:29.909911 containerd[1444]: 2024-12-13 01:33:29.884 [INFO][4725] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.998d95aa04cdca984413c4793e2213b7c75502866078cf74a3ff29afdccc8688" host="localhost" Dec 13 01:33:29.909911 containerd[1444]: 2024-12-13 01:33:29.884 [INFO][4725] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Dec 13 01:33:29.909911 containerd[1444]: 2024-12-13 01:33:29.884 [INFO][4725] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="998d95aa04cdca984413c4793e2213b7c75502866078cf74a3ff29afdccc8688" HandleID="k8s-pod-network.998d95aa04cdca984413c4793e2213b7c75502866078cf74a3ff29afdccc8688" Workload="localhost-k8s-calico--kube--controllers--7fd5945fd6--dcfsq-eth0" Dec 13 01:33:29.910483 containerd[1444]: 2024-12-13 01:33:29.887 [INFO][4679] cni-plugin/k8s.go 386: Populated endpoint ContainerID="998d95aa04cdca984413c4793e2213b7c75502866078cf74a3ff29afdccc8688" Namespace="calico-system" Pod="calico-kube-controllers-7fd5945fd6-dcfsq" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7fd5945fd6--dcfsq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7fd5945fd6--dcfsq-eth0", GenerateName:"calico-kube-controllers-7fd5945fd6-", Namespace:"calico-system", SelfLink:"", UID:"4c7b6da3-bd72-4f9f-b426-ad3b46174127", ResourceVersion:"984", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 33, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7fd5945fd6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-7fd5945fd6-dcfsq", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali096546dc758", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:33:29.910483 containerd[1444]: 2024-12-13 01:33:29.888 [INFO][4679] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.134/32] ContainerID="998d95aa04cdca984413c4793e2213b7c75502866078cf74a3ff29afdccc8688" Namespace="calico-system" Pod="calico-kube-controllers-7fd5945fd6-dcfsq" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7fd5945fd6--dcfsq-eth0" Dec 13 01:33:29.910483 containerd[1444]: 2024-12-13 01:33:29.888 [INFO][4679] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali096546dc758 ContainerID="998d95aa04cdca984413c4793e2213b7c75502866078cf74a3ff29afdccc8688" Namespace="calico-system" Pod="calico-kube-controllers-7fd5945fd6-dcfsq" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7fd5945fd6--dcfsq-eth0" Dec 13 01:33:29.910483 containerd[1444]: 2024-12-13 01:33:29.896 [INFO][4679] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="998d95aa04cdca984413c4793e2213b7c75502866078cf74a3ff29afdccc8688" Namespace="calico-system" Pod="calico-kube-controllers-7fd5945fd6-dcfsq" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7fd5945fd6--dcfsq-eth0" Dec 13 01:33:29.910483 containerd[1444]: 2024-12-13 01:33:29.897 [INFO][4679] cni-plugin/k8s.go 414: Added Mac, interface name, and active container 
ID to endpoint ContainerID="998d95aa04cdca984413c4793e2213b7c75502866078cf74a3ff29afdccc8688" Namespace="calico-system" Pod="calico-kube-controllers-7fd5945fd6-dcfsq" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7fd5945fd6--dcfsq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7fd5945fd6--dcfsq-eth0", GenerateName:"calico-kube-controllers-7fd5945fd6-", Namespace:"calico-system", SelfLink:"", UID:"4c7b6da3-bd72-4f9f-b426-ad3b46174127", ResourceVersion:"984", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 33, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7fd5945fd6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"998d95aa04cdca984413c4793e2213b7c75502866078cf74a3ff29afdccc8688", Pod:"calico-kube-controllers-7fd5945fd6-dcfsq", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali096546dc758", MAC:"96:ad:ae:70:21:f1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:33:29.910483 containerd[1444]: 2024-12-13 01:33:29.905 [INFO][4679] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="998d95aa04cdca984413c4793e2213b7c75502866078cf74a3ff29afdccc8688" Namespace="calico-system" Pod="calico-kube-controllers-7fd5945fd6-dcfsq" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7fd5945fd6--dcfsq-eth0" Dec 13 01:33:29.924983 systemd[1]: Started cri-containerd-87fb3e1251a8c6ac832587ceb49493501446fe16e1b6c5eabf1259f7a84a3e3b.scope - libcontainer container 87fb3e1251a8c6ac832587ceb49493501446fe16e1b6c5eabf1259f7a84a3e3b. Dec 13 01:33:29.938675 containerd[1444]: time="2024-12-13T01:33:29.938187942Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:33:29.938675 containerd[1444]: time="2024-12-13T01:33:29.938249865Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:33:29.938675 containerd[1444]: time="2024-12-13T01:33:29.938269505Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:33:29.938675 containerd[1444]: time="2024-12-13T01:33:29.938340188Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:33:29.942871 systemd-resolved[1314]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 01:33:29.959022 systemd[1]: Started cri-containerd-998d95aa04cdca984413c4793e2213b7c75502866078cf74a3ff29afdccc8688.scope - libcontainer container 998d95aa04cdca984413c4793e2213b7c75502866078cf74a3ff29afdccc8688. Dec 13 01:33:29.974888 systemd-resolved[1314]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 01:33:29.981701 containerd[1444]: time="2024-12-13T01:33:29.981649679Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d9556f564-qscm5,Uid:98996d25-3276-48cb-98f2-b0369f62d55a,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"87fb3e1251a8c6ac832587ceb49493501446fe16e1b6c5eabf1259f7a84a3e3b\"" Dec 13 01:33:29.987535 containerd[1444]: time="2024-12-13T01:33:29.987424969Z" level=info msg="CreateContainer within sandbox \"87fb3e1251a8c6ac832587ceb49493501446fe16e1b6c5eabf1259f7a84a3e3b\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Dec 13 01:33:29.994684 containerd[1444]: time="2024-12-13T01:33:29.993950446Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7fd5945fd6-dcfsq,Uid:4c7b6da3-bd72-4f9f-b426-ad3b46174127,Namespace:calico-system,Attempt:1,} returns sandbox id \"998d95aa04cdca984413c4793e2213b7c75502866078cf74a3ff29afdccc8688\"" Dec 13 01:33:30.005776 containerd[1444]: time="2024-12-13T01:33:30.005721630Z" level=info msg="CreateContainer within sandbox \"87fb3e1251a8c6ac832587ceb49493501446fe16e1b6c5eabf1259f7a84a3e3b\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"d4c0fe1d202c832df4995b9e143bb8c5da3f8845ac5c9b1be5dc837729a459e2\"" Dec 13 01:33:30.006300 containerd[1444]: time="2024-12-13T01:33:30.006224288Z" level=info msg="StartContainer for \"d4c0fe1d202c832df4995b9e143bb8c5da3f8845ac5c9b1be5dc837729a459e2\"" Dec 13 01:33:30.038981 systemd[1]: Started cri-containerd-d4c0fe1d202c832df4995b9e143bb8c5da3f8845ac5c9b1be5dc837729a459e2.scope - libcontainer container d4c0fe1d202c832df4995b9e143bb8c5da3f8845ac5c9b1be5dc837729a459e2. Dec 13 01:33:30.068100 sshd[4771]: pam_unix(sshd:session): session closed for user core Dec 13 01:33:30.076608 systemd[1]: sshd@13-10.0.0.74:22-10.0.0.1:34364.service: Deactivated successfully. Dec 13 01:33:30.078444 systemd[1]: session-14.scope: Deactivated successfully. Dec 13 01:33:30.080729 systemd-logind[1427]: Session 14 logged out. Waiting for processes to exit. Dec 13 01:33:30.082105 containerd[1444]: time="2024-12-13T01:33:30.082054040Z" level=info msg="StartContainer for \"d4c0fe1d202c832df4995b9e143bb8c5da3f8845ac5c9b1be5dc837729a459e2\" returns successfully" Dec 13 01:33:30.088375 systemd[1]: Started sshd@14-10.0.0.74:22-10.0.0.1:34366.service - OpenSSH per-connection server daemon (10.0.0.1:34366). Dec 13 01:33:30.089930 systemd-logind[1427]: Removed session 14. Dec 13 01:33:30.128913 sshd[4955]: Accepted publickey for core from 10.0.0.1 port 34366 ssh2: RSA SHA256:yVKhZEHbC7ylZ7bY3Y8pwdh1t/xp6Vz/y3yLFfd9j+Q Dec 13 01:33:30.130244 sshd[4955]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:33:30.137069 systemd-logind[1427]: New session 15 of user core. Dec 13 01:33:30.148943 systemd[1]: Started session-15.scope - Session 15 of User core. 
Dec 13 01:33:30.160718 kubelet[2557]: I1213 01:33:30.160689 2557 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 01:33:30.161918 kubelet[2557]: E1213 01:33:30.161714 2557 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:33:30.414643 kubelet[2557]: E1213 01:33:30.414597 2557 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:33:30.415954 kubelet[2557]: E1213 01:33:30.415892 2557 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:33:30.429427 kubelet[2557]: I1213 01:33:30.429381 2557 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-d9556f564-qscm5" podStartSLOduration=25.429338219 podStartE2EDuration="25.429338219s" podCreationTimestamp="2024-12-13 01:33:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:33:30.426856731 +0000 UTC m=+49.302172961" watchObservedRunningTime="2024-12-13 01:33:30.429338219 +0000 UTC m=+49.304654449" Dec 13 01:33:30.816185 containerd[1444]: time="2024-12-13T01:33:30.816138132Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:33:30.817044 containerd[1444]: time="2024-12-13T01:33:30.816674071Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7464730" Dec 13 01:33:30.818125 containerd[1444]: time="2024-12-13T01:33:30.817590944Z" level=info msg="ImageCreate event name:\"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:33:30.820544 containerd[1444]: time="2024-12-13T01:33:30.820507968Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:33:30.821363 containerd[1444]: time="2024-12-13T01:33:30.821334478Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"8834384\" in 1.052933336s" Dec 13 01:33:30.821363 containerd[1444]: time="2024-12-13T01:33:30.821363959Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\"" Dec 13 01:33:30.822036 containerd[1444]: time="2024-12-13T01:33:30.822008502Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Dec 13 01:33:30.825194 containerd[1444]: time="2024-12-13T01:33:30.825164695Z" level=info msg="CreateContainer within sandbox \"a04e59c13ca649df7edd398ac23b34935dd00176b2afb5aab9f02e2b724cbcb1\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Dec 13 01:33:30.848309 containerd[1444]: time="2024-12-13T01:33:30.848252201Z" level=info 
msg="CreateContainer within sandbox \"a04e59c13ca649df7edd398ac23b34935dd00176b2afb5aab9f02e2b724cbcb1\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"476c6ed0fc0e7feb4f06925cc30e88f86c9650781270ba1362e87c3fdea37551\"" Dec 13 01:33:30.850145 containerd[1444]: time="2024-12-13T01:33:30.849017828Z" level=info msg="StartContainer for \"476c6ed0fc0e7feb4f06925cc30e88f86c9650781270ba1362e87c3fdea37551\"" Dec 13 01:33:30.856469 systemd-networkd[1389]: cali4084c6038d7: Gained IPv6LL Dec 13 01:33:30.884996 systemd[1]: Started cri-containerd-476c6ed0fc0e7feb4f06925cc30e88f86c9650781270ba1362e87c3fdea37551.scope - libcontainer container 476c6ed0fc0e7feb4f06925cc30e88f86c9650781270ba1362e87c3fdea37551. Dec 13 01:33:30.918548 containerd[1444]: time="2024-12-13T01:33:30.917570680Z" level=info msg="StartContainer for \"476c6ed0fc0e7feb4f06925cc30e88f86c9650781270ba1362e87c3fdea37551\" returns successfully" Dec 13 01:33:31.365940 systemd-networkd[1389]: cali096546dc758: Gained IPv6LL Dec 13 01:33:31.419827 kubelet[2557]: I1213 01:33:31.419775 2557 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 01:33:31.627206 systemd-networkd[1389]: cali18dbb59cb59: Gained IPv6LL Dec 13 01:33:31.797928 sshd[4955]: pam_unix(sshd:session): session closed for user core Dec 13 01:33:31.808517 systemd[1]: sshd@14-10.0.0.74:22-10.0.0.1:34366.service: Deactivated successfully. Dec 13 01:33:31.812531 systemd[1]: session-15.scope: Deactivated successfully. Dec 13 01:33:31.815510 systemd-logind[1427]: Session 15 logged out. Waiting for processes to exit. Dec 13 01:33:31.828166 systemd[1]: Started sshd@15-10.0.0.74:22-10.0.0.1:34378.service - OpenSSH per-connection server daemon (10.0.0.1:34378). Dec 13 01:33:31.829076 systemd-logind[1427]: Removed session 15. Dec 13 01:33:31.872450 sshd[5073]: Accepted publickey for core from 10.0.0.1 port 34378 ssh2: RSA SHA256:yVKhZEHbC7ylZ7bY3Y8pwdh1t/xp6Vz/y3yLFfd9j+Q Dec 13 01:33:31.873272 sshd[5073]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:33:31.879447 systemd-logind[1427]: New session 16 of user core. Dec 13 01:33:31.889993 systemd[1]: Started session-16.scope - Session 16 of User core. Dec 13 01:33:32.287359 sshd[5073]: pam_unix(sshd:session): session closed for user core Dec 13 01:33:32.295666 systemd[1]: sshd@15-10.0.0.74:22-10.0.0.1:34378.service: Deactivated successfully. Dec 13 01:33:32.300437 systemd[1]: session-16.scope: Deactivated successfully. Dec 13 01:33:32.302543 systemd-logind[1427]: Session 16 logged out. Waiting for processes to exit. Dec 13 01:33:32.310093 systemd[1]: Started sshd@16-10.0.0.74:22-10.0.0.1:34388.service - OpenSSH per-connection server daemon (10.0.0.1:34388). Dec 13 01:33:32.310957 systemd-logind[1427]: Removed session 16. Dec 13 01:33:32.351731 sshd[5092]: Accepted publickey for core from 10.0.0.1 port 34388 ssh2: RSA SHA256:yVKhZEHbC7ylZ7bY3Y8pwdh1t/xp6Vz/y3yLFfd9j+Q Dec 13 01:33:32.353119 sshd[5092]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:33:32.357331 systemd-logind[1427]: New session 17 of user core. Dec 13 01:33:32.365041 systemd[1]: Started session-17.scope - Session 17 of User core. 
Dec 13 01:33:32.374850 containerd[1444]: time="2024-12-13T01:33:32.374798792Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:33:32.375629 containerd[1444]: time="2024-12-13T01:33:32.375450735Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=31953828" Dec 13 01:33:32.376257 containerd[1444]: time="2024-12-13T01:33:32.376223042Z" level=info msg="ImageCreate event name:\"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:33:32.379843 containerd[1444]: time="2024-12-13T01:33:32.379517756Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:33:32.380508 containerd[1444]: time="2024-12-13T01:33:32.380476630Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"33323450\" in 1.557847586s" Dec 13 01:33:32.380605 containerd[1444]: time="2024-12-13T01:33:32.380589834Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\"" Dec 13 01:33:32.381423 containerd[1444]: time="2024-12-13T01:33:32.381397582Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Dec 13 01:33:32.388527 containerd[1444]: time="2024-12-13T01:33:32.388340544Z" level=info msg="CreateContainer within sandbox \"998d95aa04cdca984413c4793e2213b7c75502866078cf74a3ff29afdccc8688\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Dec 13 01:33:32.404289 containerd[1444]: time="2024-12-13T01:33:32.404244457Z" level=info msg="CreateContainer within sandbox \"998d95aa04cdca984413c4793e2213b7c75502866078cf74a3ff29afdccc8688\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"6428122cb8f29242e173f6d8b3154288698aa715760763a161355b9cc7ac0e0c\"" Dec 13 01:33:32.405158 containerd[1444]: time="2024-12-13T01:33:32.405002964Z" level=info msg="StartContainer for \"6428122cb8f29242e173f6d8b3154288698aa715760763a161355b9cc7ac0e0c\"" Dec 13 01:33:32.438973 systemd[1]: Started cri-containerd-6428122cb8f29242e173f6d8b3154288698aa715760763a161355b9cc7ac0e0c.scope - libcontainer container 6428122cb8f29242e173f6d8b3154288698aa715760763a161355b9cc7ac0e0c. Dec 13 01:33:32.469169 containerd[1444]: time="2024-12-13T01:33:32.469123276Z" level=info msg="StartContainer for \"6428122cb8f29242e173f6d8b3154288698aa715760763a161355b9cc7ac0e0c\" returns successfully" Dec 13 01:33:32.497170 sshd[5092]: pam_unix(sshd:session): session closed for user core Dec 13 01:33:32.500971 systemd[1]: sshd@16-10.0.0.74:22-10.0.0.1:34388.service: Deactivated successfully. Dec 13 01:33:32.502694 systemd[1]: session-17.scope: Deactivated successfully. Dec 13 01:33:32.503355 systemd-logind[1427]: Session 17 logged out. Waiting for processes to exit. Dec 13 01:33:32.505423 systemd-logind[1427]: Removed session 17. 
Dec 13 01:33:33.441373 kubelet[2557]: I1213 01:33:33.441327 2557 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-7fd5945fd6-dcfsq" podStartSLOduration=26.056082041 podStartE2EDuration="28.441286656s" podCreationTimestamp="2024-12-13 01:33:05 +0000 UTC" firstStartedPulling="2024-12-13 01:33:29.995806913 +0000 UTC m=+48.871123103" lastFinishedPulling="2024-12-13 01:33:32.381011488 +0000 UTC m=+51.256327718" observedRunningTime="2024-12-13 01:33:33.440123776 +0000 UTC m=+52.315440006" watchObservedRunningTime="2024-12-13 01:33:33.441286656 +0000 UTC m=+52.316602846" Dec 13 01:33:33.545234 containerd[1444]: time="2024-12-13T01:33:33.545178628Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:33:33.546271 containerd[1444]: time="2024-12-13T01:33:33.546236505Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=9883368" Dec 13 01:33:33.547493 containerd[1444]: time="2024-12-13T01:33:33.547269580Z" level=info msg="ImageCreate event name:\"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:33:33.549416 containerd[1444]: time="2024-12-13T01:33:33.549344932Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:33:33.550142 containerd[1444]: time="2024-12-13T01:33:33.550116398Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11252974\" in 1.168593932s" Dec 13 01:33:33.550309 containerd[1444]: time="2024-12-13T01:33:33.550216522Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\"" Dec 13 01:33:33.558209 containerd[1444]: time="2024-12-13T01:33:33.558122194Z" level=info msg="CreateContainer within sandbox \"a04e59c13ca649df7edd398ac23b34935dd00176b2afb5aab9f02e2b724cbcb1\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Dec 13 01:33:33.572136 containerd[1444]: time="2024-12-13T01:33:33.572101914Z" level=info msg="CreateContainer within sandbox \"a04e59c13ca649df7edd398ac23b34935dd00176b2afb5aab9f02e2b724cbcb1\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"25939984d8564e0b95ff8a11c98e151c56d527aa027c98eaa50a3fc45a712308\"" Dec 13 01:33:33.572550 containerd[1444]: time="2024-12-13T01:33:33.572474647Z" level=info msg="StartContainer for \"25939984d8564e0b95ff8a11c98e151c56d527aa027c98eaa50a3fc45a712308\"" Dec 13 01:33:33.599965 systemd[1]: Started cri-containerd-25939984d8564e0b95ff8a11c98e151c56d527aa027c98eaa50a3fc45a712308.scope - libcontainer container 25939984d8564e0b95ff8a11c98e151c56d527aa027c98eaa50a3fc45a712308. 
Dec 13 01:33:33.622823 containerd[1444]: time="2024-12-13T01:33:33.622770457Z" level=info msg="StartContainer for \"25939984d8564e0b95ff8a11c98e151c56d527aa027c98eaa50a3fc45a712308\" returns successfully" Dec 13 01:33:34.319105 kubelet[2557]: I1213 01:33:34.319062 2557 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Dec 13 01:33:34.322667 kubelet[2557]: I1213 01:33:34.322643 2557 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Dec 13 01:33:34.443837 kubelet[2557]: I1213 01:33:34.443774 2557 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-f78qq" podStartSLOduration=25.661294029 podStartE2EDuration="29.44371563s" podCreationTimestamp="2024-12-13 01:33:05 +0000 UTC" firstStartedPulling="2024-12-13 01:33:29.768099691 +0000 UTC m=+48.643415921" lastFinishedPulling="2024-12-13 01:33:33.550521292 +0000 UTC m=+52.425837522" observedRunningTime="2024-12-13 01:33:34.442708236 +0000 UTC m=+53.318024466" watchObservedRunningTime="2024-12-13 01:33:34.44371563 +0000 UTC m=+53.319031860" Dec 13 01:33:37.509352 systemd[1]: Started sshd@17-10.0.0.74:22-10.0.0.1:41468.service - OpenSSH per-connection server daemon (10.0.0.1:41468). Dec 13 01:33:37.556114 sshd[5223]: Accepted publickey for core from 10.0.0.1 port 41468 ssh2: RSA SHA256:yVKhZEHbC7ylZ7bY3Y8pwdh1t/xp6Vz/y3yLFfd9j+Q Dec 13 01:33:37.559746 sshd[5223]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:33:37.567255 systemd-logind[1427]: New session 18 of user core. Dec 13 01:33:37.580039 systemd[1]: Started session-18.scope - Session 18 of User core. Dec 13 01:33:37.748553 sshd[5223]: pam_unix(sshd:session): session closed for user core Dec 13 01:33:37.752171 systemd[1]: sshd@17-10.0.0.74:22-10.0.0.1:41468.service: Deactivated successfully. Dec 13 01:33:37.754718 systemd[1]: session-18.scope: Deactivated successfully. Dec 13 01:33:37.756565 systemd-logind[1427]: Session 18 logged out. Waiting for processes to exit. Dec 13 01:33:37.757388 systemd-logind[1427]: Removed session 18. Dec 13 01:33:41.246842 containerd[1444]: time="2024-12-13T01:33:41.246730293Z" level=info msg="StopPodSandbox for \"27c1980e72fd94912649c06eb672cf2928715d4d8c5fd0a59db4d051dd893f60\"" Dec 13 01:33:41.316839 containerd[1444]: 2024-12-13 01:33:41.282 [WARNING][5255] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="27c1980e72fd94912649c06eb672cf2928715d4d8c5fd0a59db4d051dd893f60" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7fd5945fd6--dcfsq-eth0", GenerateName:"calico-kube-controllers-7fd5945fd6-", Namespace:"calico-system", SelfLink:"", UID:"4c7b6da3-bd72-4f9f-b426-ad3b46174127", ResourceVersion:"1071", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 33, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7fd5945fd6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"998d95aa04cdca984413c4793e2213b7c75502866078cf74a3ff29afdccc8688", Pod:"calico-kube-controllers-7fd5945fd6-dcfsq", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali096546dc758", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:33:41.316839 containerd[1444]: 2024-12-13 01:33:41.282 [INFO][5255] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="27c1980e72fd94912649c06eb672cf2928715d4d8c5fd0a59db4d051dd893f60" Dec 13 01:33:41.316839 containerd[1444]: 2024-12-13 01:33:41.283 [INFO][5255] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="27c1980e72fd94912649c06eb672cf2928715d4d8c5fd0a59db4d051dd893f60" iface="eth0" netns="" Dec 13 01:33:41.316839 containerd[1444]: 2024-12-13 01:33:41.283 [INFO][5255] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="27c1980e72fd94912649c06eb672cf2928715d4d8c5fd0a59db4d051dd893f60" Dec 13 01:33:41.316839 containerd[1444]: 2024-12-13 01:33:41.283 [INFO][5255] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="27c1980e72fd94912649c06eb672cf2928715d4d8c5fd0a59db4d051dd893f60" Dec 13 01:33:41.316839 containerd[1444]: 2024-12-13 01:33:41.304 [INFO][5262] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="27c1980e72fd94912649c06eb672cf2928715d4d8c5fd0a59db4d051dd893f60" HandleID="k8s-pod-network.27c1980e72fd94912649c06eb672cf2928715d4d8c5fd0a59db4d051dd893f60" Workload="localhost-k8s-calico--kube--controllers--7fd5945fd6--dcfsq-eth0" Dec 13 01:33:41.316839 containerd[1444]: 2024-12-13 01:33:41.304 [INFO][5262] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:33:41.316839 containerd[1444]: 2024-12-13 01:33:41.304 [INFO][5262] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:33:41.316839 containerd[1444]: 2024-12-13 01:33:41.312 [WARNING][5262] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="27c1980e72fd94912649c06eb672cf2928715d4d8c5fd0a59db4d051dd893f60" HandleID="k8s-pod-network.27c1980e72fd94912649c06eb672cf2928715d4d8c5fd0a59db4d051dd893f60" Workload="localhost-k8s-calico--kube--controllers--7fd5945fd6--dcfsq-eth0" Dec 13 01:33:41.316839 containerd[1444]: 2024-12-13 01:33:41.312 [INFO][5262] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="27c1980e72fd94912649c06eb672cf2928715d4d8c5fd0a59db4d051dd893f60" HandleID="k8s-pod-network.27c1980e72fd94912649c06eb672cf2928715d4d8c5fd0a59db4d051dd893f60" Workload="localhost-k8s-calico--kube--controllers--7fd5945fd6--dcfsq-eth0" Dec 13 01:33:41.316839 containerd[1444]: 2024-12-13 01:33:41.313 [INFO][5262] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:33:41.316839 containerd[1444]: 2024-12-13 01:33:41.315 [INFO][5255] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="27c1980e72fd94912649c06eb672cf2928715d4d8c5fd0a59db4d051dd893f60" Dec 13 01:33:41.316839 containerd[1444]: time="2024-12-13T01:33:41.316801001Z" level=info msg="TearDown network for sandbox \"27c1980e72fd94912649c06eb672cf2928715d4d8c5fd0a59db4d051dd893f60\" successfully" Dec 13 01:33:41.316839 containerd[1444]: time="2024-12-13T01:33:41.316844162Z" level=info msg="StopPodSandbox for \"27c1980e72fd94912649c06eb672cf2928715d4d8c5fd0a59db4d051dd893f60\" returns successfully" Dec 13 01:33:41.317398 containerd[1444]: time="2024-12-13T01:33:41.317234375Z" level=info msg="RemovePodSandbox for \"27c1980e72fd94912649c06eb672cf2928715d4d8c5fd0a59db4d051dd893f60\"" Dec 13 01:33:41.325930 containerd[1444]: time="2024-12-13T01:33:41.325886290Z" level=info msg="Forcibly stopping sandbox \"27c1980e72fd94912649c06eb672cf2928715d4d8c5fd0a59db4d051dd893f60\"" Dec 13 01:33:41.394607 containerd[1444]: 2024-12-13 01:33:41.362 [WARNING][5284] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="27c1980e72fd94912649c06eb672cf2928715d4d8c5fd0a59db4d051dd893f60" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7fd5945fd6--dcfsq-eth0", GenerateName:"calico-kube-controllers-7fd5945fd6-", Namespace:"calico-system", SelfLink:"", UID:"4c7b6da3-bd72-4f9f-b426-ad3b46174127", ResourceVersion:"1071", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 33, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7fd5945fd6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"998d95aa04cdca984413c4793e2213b7c75502866078cf74a3ff29afdccc8688", Pod:"calico-kube-controllers-7fd5945fd6-dcfsq", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali096546dc758", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:33:41.394607 containerd[1444]: 2024-12-13 01:33:41.362 [INFO][5284] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="27c1980e72fd94912649c06eb672cf2928715d4d8c5fd0a59db4d051dd893f60" Dec 13 01:33:41.394607 containerd[1444]: 2024-12-13 01:33:41.362 [INFO][5284] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="27c1980e72fd94912649c06eb672cf2928715d4d8c5fd0a59db4d051dd893f60" iface="eth0" netns="" Dec 13 01:33:41.394607 containerd[1444]: 2024-12-13 01:33:41.362 [INFO][5284] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="27c1980e72fd94912649c06eb672cf2928715d4d8c5fd0a59db4d051dd893f60" Dec 13 01:33:41.394607 containerd[1444]: 2024-12-13 01:33:41.362 [INFO][5284] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="27c1980e72fd94912649c06eb672cf2928715d4d8c5fd0a59db4d051dd893f60" Dec 13 01:33:41.394607 containerd[1444]: 2024-12-13 01:33:41.381 [INFO][5292] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="27c1980e72fd94912649c06eb672cf2928715d4d8c5fd0a59db4d051dd893f60" HandleID="k8s-pod-network.27c1980e72fd94912649c06eb672cf2928715d4d8c5fd0a59db4d051dd893f60" Workload="localhost-k8s-calico--kube--controllers--7fd5945fd6--dcfsq-eth0" Dec 13 01:33:41.394607 containerd[1444]: 2024-12-13 01:33:41.381 [INFO][5292] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:33:41.394607 containerd[1444]: 2024-12-13 01:33:41.381 [INFO][5292] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:33:41.394607 containerd[1444]: 2024-12-13 01:33:41.389 [WARNING][5292] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="27c1980e72fd94912649c06eb672cf2928715d4d8c5fd0a59db4d051dd893f60" HandleID="k8s-pod-network.27c1980e72fd94912649c06eb672cf2928715d4d8c5fd0a59db4d051dd893f60" Workload="localhost-k8s-calico--kube--controllers--7fd5945fd6--dcfsq-eth0" Dec 13 01:33:41.394607 containerd[1444]: 2024-12-13 01:33:41.389 [INFO][5292] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="27c1980e72fd94912649c06eb672cf2928715d4d8c5fd0a59db4d051dd893f60" HandleID="k8s-pod-network.27c1980e72fd94912649c06eb672cf2928715d4d8c5fd0a59db4d051dd893f60" Workload="localhost-k8s-calico--kube--controllers--7fd5945fd6--dcfsq-eth0" Dec 13 01:33:41.394607 containerd[1444]: 2024-12-13 01:33:41.390 [INFO][5292] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:33:41.394607 containerd[1444]: 2024-12-13 01:33:41.393 [INFO][5284] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="27c1980e72fd94912649c06eb672cf2928715d4d8c5fd0a59db4d051dd893f60" Dec 13 01:33:41.395022 containerd[1444]: time="2024-12-13T01:33:41.394643236Z" level=info msg="TearDown network for sandbox \"27c1980e72fd94912649c06eb672cf2928715d4d8c5fd0a59db4d051dd893f60\" successfully" Dec 13 01:33:41.438067 containerd[1444]: time="2024-12-13T01:33:41.438018575Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"27c1980e72fd94912649c06eb672cf2928715d4d8c5fd0a59db4d051dd893f60\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 01:33:41.438168 containerd[1444]: time="2024-12-13T01:33:41.438087817Z" level=info msg="RemovePodSandbox \"27c1980e72fd94912649c06eb672cf2928715d4d8c5fd0a59db4d051dd893f60\" returns successfully" Dec 13 01:33:41.438829 containerd[1444]: time="2024-12-13T01:33:41.438493110Z" level=info msg="StopPodSandbox for \"b47cc38580f7fa65992edabae074323e10bfd53c72e9a8151438a1a43bba314f\"" Dec 13 01:33:41.510015 containerd[1444]: 2024-12-13 01:33:41.474 [WARNING][5314] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b47cc38580f7fa65992edabae074323e10bfd53c72e9a8151438a1a43bba314f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--d9556f564--qscm5-eth0", GenerateName:"calico-apiserver-d9556f564-", Namespace:"calico-apiserver", SelfLink:"", UID:"98996d25-3276-48cb-98f2-b0369f62d55a", ResourceVersion:"1017", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 33, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"d9556f564", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"87fb3e1251a8c6ac832587ceb49493501446fe16e1b6c5eabf1259f7a84a3e3b", Pod:"calico-apiserver-d9556f564-qscm5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4084c6038d7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:33:41.510015 containerd[1444]: 2024-12-13 01:33:41.475 [INFO][5314] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b47cc38580f7fa65992edabae074323e10bfd53c72e9a8151438a1a43bba314f" Dec 13 01:33:41.510015 containerd[1444]: 2024-12-13 01:33:41.475 [INFO][5314] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b47cc38580f7fa65992edabae074323e10bfd53c72e9a8151438a1a43bba314f" iface="eth0" netns="" Dec 13 01:33:41.510015 containerd[1444]: 2024-12-13 01:33:41.475 [INFO][5314] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b47cc38580f7fa65992edabae074323e10bfd53c72e9a8151438a1a43bba314f" Dec 13 01:33:41.510015 containerd[1444]: 2024-12-13 01:33:41.475 [INFO][5314] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b47cc38580f7fa65992edabae074323e10bfd53c72e9a8151438a1a43bba314f" Dec 13 01:33:41.510015 containerd[1444]: 2024-12-13 01:33:41.495 [INFO][5321] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b47cc38580f7fa65992edabae074323e10bfd53c72e9a8151438a1a43bba314f" HandleID="k8s-pod-network.b47cc38580f7fa65992edabae074323e10bfd53c72e9a8151438a1a43bba314f" Workload="localhost-k8s-calico--apiserver--d9556f564--qscm5-eth0" Dec 13 01:33:41.510015 containerd[1444]: 2024-12-13 01:33:41.496 [INFO][5321] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:33:41.510015 containerd[1444]: 2024-12-13 01:33:41.496 [INFO][5321] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:33:41.510015 containerd[1444]: 2024-12-13 01:33:41.505 [WARNING][5321] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b47cc38580f7fa65992edabae074323e10bfd53c72e9a8151438a1a43bba314f" HandleID="k8s-pod-network.b47cc38580f7fa65992edabae074323e10bfd53c72e9a8151438a1a43bba314f" Workload="localhost-k8s-calico--apiserver--d9556f564--qscm5-eth0" Dec 13 01:33:41.510015 containerd[1444]: 2024-12-13 01:33:41.505 [INFO][5321] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b47cc38580f7fa65992edabae074323e10bfd53c72e9a8151438a1a43bba314f" HandleID="k8s-pod-network.b47cc38580f7fa65992edabae074323e10bfd53c72e9a8151438a1a43bba314f" Workload="localhost-k8s-calico--apiserver--d9556f564--qscm5-eth0" Dec 13 01:33:41.510015 containerd[1444]: 2024-12-13 01:33:41.507 [INFO][5321] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:33:41.510015 containerd[1444]: 2024-12-13 01:33:41.508 [INFO][5314] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="b47cc38580f7fa65992edabae074323e10bfd53c72e9a8151438a1a43bba314f" Dec 13 01:33:41.510015 containerd[1444]: time="2024-12-13T01:33:41.509883940Z" level=info msg="TearDown network for sandbox \"b47cc38580f7fa65992edabae074323e10bfd53c72e9a8151438a1a43bba314f\" successfully" Dec 13 01:33:41.510015 containerd[1444]: time="2024-12-13T01:33:41.509908301Z" level=info msg="StopPodSandbox for \"b47cc38580f7fa65992edabae074323e10bfd53c72e9a8151438a1a43bba314f\" returns successfully" Dec 13 01:33:41.511886 containerd[1444]: time="2024-12-13T01:33:41.511653996Z" level=info msg="RemovePodSandbox for \"b47cc38580f7fa65992edabae074323e10bfd53c72e9a8151438a1a43bba314f\"" Dec 13 01:33:41.511886 containerd[1444]: time="2024-12-13T01:33:41.511683877Z" level=info msg="Forcibly stopping sandbox \"b47cc38580f7fa65992edabae074323e10bfd53c72e9a8151438a1a43bba314f\"" Dec 13 01:33:41.584492 containerd[1444]: 2024-12-13 01:33:41.550 [WARNING][5345] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b47cc38580f7fa65992edabae074323e10bfd53c72e9a8151438a1a43bba314f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--d9556f564--qscm5-eth0", GenerateName:"calico-apiserver-d9556f564-", Namespace:"calico-apiserver", SelfLink:"", UID:"98996d25-3276-48cb-98f2-b0369f62d55a", ResourceVersion:"1017", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 33, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"d9556f564", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"87fb3e1251a8c6ac832587ceb49493501446fe16e1b6c5eabf1259f7a84a3e3b", Pod:"calico-apiserver-d9556f564-qscm5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4084c6038d7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:33:41.584492 containerd[1444]: 2024-12-13 01:33:41.551 [INFO][5345] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b47cc38580f7fa65992edabae074323e10bfd53c72e9a8151438a1a43bba314f" Dec 13 01:33:41.584492 containerd[1444]: 2024-12-13 01:33:41.551 [INFO][5345] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b47cc38580f7fa65992edabae074323e10bfd53c72e9a8151438a1a43bba314f" iface="eth0" netns="" Dec 13 01:33:41.584492 containerd[1444]: 2024-12-13 01:33:41.551 [INFO][5345] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b47cc38580f7fa65992edabae074323e10bfd53c72e9a8151438a1a43bba314f" Dec 13 01:33:41.584492 containerd[1444]: 2024-12-13 01:33:41.551 [INFO][5345] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b47cc38580f7fa65992edabae074323e10bfd53c72e9a8151438a1a43bba314f" Dec 13 01:33:41.584492 containerd[1444]: 2024-12-13 01:33:41.572 [INFO][5352] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b47cc38580f7fa65992edabae074323e10bfd53c72e9a8151438a1a43bba314f" HandleID="k8s-pod-network.b47cc38580f7fa65992edabae074323e10bfd53c72e9a8151438a1a43bba314f" Workload="localhost-k8s-calico--apiserver--d9556f564--qscm5-eth0" Dec 13 01:33:41.584492 containerd[1444]: 2024-12-13 01:33:41.572 [INFO][5352] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:33:41.584492 containerd[1444]: 2024-12-13 01:33:41.572 [INFO][5352] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:33:41.584492 containerd[1444]: 2024-12-13 01:33:41.580 [WARNING][5352] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b47cc38580f7fa65992edabae074323e10bfd53c72e9a8151438a1a43bba314f" HandleID="k8s-pod-network.b47cc38580f7fa65992edabae074323e10bfd53c72e9a8151438a1a43bba314f" Workload="localhost-k8s-calico--apiserver--d9556f564--qscm5-eth0" Dec 13 01:33:41.584492 containerd[1444]: 2024-12-13 01:33:41.580 [INFO][5352] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b47cc38580f7fa65992edabae074323e10bfd53c72e9a8151438a1a43bba314f" HandleID="k8s-pod-network.b47cc38580f7fa65992edabae074323e10bfd53c72e9a8151438a1a43bba314f" Workload="localhost-k8s-calico--apiserver--d9556f564--qscm5-eth0" Dec 13 01:33:41.584492 containerd[1444]: 2024-12-13 01:33:41.581 [INFO][5352] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:33:41.584492 containerd[1444]: 2024-12-13 01:33:41.583 [INFO][5345] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="b47cc38580f7fa65992edabae074323e10bfd53c72e9a8151438a1a43bba314f" Dec 13 01:33:41.584904 containerd[1444]: time="2024-12-13T01:33:41.584539874Z" level=info msg="TearDown network for sandbox \"b47cc38580f7fa65992edabae074323e10bfd53c72e9a8151438a1a43bba314f\" successfully" Dec 13 01:33:41.587110 containerd[1444]: time="2024-12-13T01:33:41.587057434Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b47cc38580f7fa65992edabae074323e10bfd53c72e9a8151438a1a43bba314f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 01:33:41.587174 containerd[1444]: time="2024-12-13T01:33:41.587127116Z" level=info msg="RemovePodSandbox \"b47cc38580f7fa65992edabae074323e10bfd53c72e9a8151438a1a43bba314f\" returns successfully" Dec 13 01:33:41.587599 containerd[1444]: time="2024-12-13T01:33:41.587575651Z" level=info msg="StopPodSandbox for \"e2280bfa2e7a43cda63ef0a9569724a6aceeaba71b17d30b4ff629813c776b9e\"" Dec 13 01:33:41.650988 containerd[1444]: 2024-12-13 01:33:41.620 [WARNING][5374] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e2280bfa2e7a43cda63ef0a9569724a6aceeaba71b17d30b4ff629813c776b9e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--d9556f564--hkcx8-eth0", GenerateName:"calico-apiserver-d9556f564-", Namespace:"calico-apiserver", SelfLink:"", UID:"7417bd6c-d06e-4e11-95c2-509efea5ad02", ResourceVersion:"987", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 33, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"d9556f564", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a671281be0e59e581afb049553e2ba77c0e27d1225343b14d025e15ca9614d8f", Pod:"calico-apiserver-d9556f564-hkcx8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid589052d8ce", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:33:41.650988 containerd[1444]: 2024-12-13 01:33:41.620 [INFO][5374] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="e2280bfa2e7a43cda63ef0a9569724a6aceeaba71b17d30b4ff629813c776b9e" Dec 13 01:33:41.650988 containerd[1444]: 2024-12-13 01:33:41.620 [INFO][5374] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e2280bfa2e7a43cda63ef0a9569724a6aceeaba71b17d30b4ff629813c776b9e" iface="eth0" netns="" Dec 13 01:33:41.650988 containerd[1444]: 2024-12-13 01:33:41.620 [INFO][5374] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e2280bfa2e7a43cda63ef0a9569724a6aceeaba71b17d30b4ff629813c776b9e" Dec 13 01:33:41.650988 containerd[1444]: 2024-12-13 01:33:41.620 [INFO][5374] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e2280bfa2e7a43cda63ef0a9569724a6aceeaba71b17d30b4ff629813c776b9e" Dec 13 01:33:41.650988 containerd[1444]: 2024-12-13 01:33:41.639 [INFO][5381] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e2280bfa2e7a43cda63ef0a9569724a6aceeaba71b17d30b4ff629813c776b9e" HandleID="k8s-pod-network.e2280bfa2e7a43cda63ef0a9569724a6aceeaba71b17d30b4ff629813c776b9e" Workload="localhost-k8s-calico--apiserver--d9556f564--hkcx8-eth0" Dec 13 01:33:41.650988 containerd[1444]: 2024-12-13 01:33:41.639 [INFO][5381] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:33:41.650988 containerd[1444]: 2024-12-13 01:33:41.639 [INFO][5381] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:33:41.650988 containerd[1444]: 2024-12-13 01:33:41.647 [WARNING][5381] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e2280bfa2e7a43cda63ef0a9569724a6aceeaba71b17d30b4ff629813c776b9e" HandleID="k8s-pod-network.e2280bfa2e7a43cda63ef0a9569724a6aceeaba71b17d30b4ff629813c776b9e" Workload="localhost-k8s-calico--apiserver--d9556f564--hkcx8-eth0" Dec 13 01:33:41.650988 containerd[1444]: 2024-12-13 01:33:41.647 [INFO][5381] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e2280bfa2e7a43cda63ef0a9569724a6aceeaba71b17d30b4ff629813c776b9e" HandleID="k8s-pod-network.e2280bfa2e7a43cda63ef0a9569724a6aceeaba71b17d30b4ff629813c776b9e" Workload="localhost-k8s-calico--apiserver--d9556f564--hkcx8-eth0" Dec 13 01:33:41.650988 containerd[1444]: 2024-12-13 01:33:41.648 [INFO][5381] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:33:41.650988 containerd[1444]: 2024-12-13 01:33:41.649 [INFO][5374] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="e2280bfa2e7a43cda63ef0a9569724a6aceeaba71b17d30b4ff629813c776b9e" Dec 13 01:33:41.651406 containerd[1444]: time="2024-12-13T01:33:41.651030108Z" level=info msg="TearDown network for sandbox \"e2280bfa2e7a43cda63ef0a9569724a6aceeaba71b17d30b4ff629813c776b9e\" successfully" Dec 13 01:33:41.651406 containerd[1444]: time="2024-12-13T01:33:41.651061189Z" level=info msg="StopPodSandbox for \"e2280bfa2e7a43cda63ef0a9569724a6aceeaba71b17d30b4ff629813c776b9e\" returns successfully" Dec 13 01:33:41.651853 containerd[1444]: time="2024-12-13T01:33:41.651807533Z" level=info msg="RemovePodSandbox for \"e2280bfa2e7a43cda63ef0a9569724a6aceeaba71b17d30b4ff629813c776b9e\"" Dec 13 01:33:41.651903 containerd[1444]: time="2024-12-13T01:33:41.651860775Z" level=info msg="Forcibly stopping sandbox \"e2280bfa2e7a43cda63ef0a9569724a6aceeaba71b17d30b4ff629813c776b9e\"" Dec 13 01:33:41.717540 containerd[1444]: 2024-12-13 01:33:41.685 [WARNING][5404] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e2280bfa2e7a43cda63ef0a9569724a6aceeaba71b17d30b4ff629813c776b9e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--d9556f564--hkcx8-eth0", GenerateName:"calico-apiserver-d9556f564-", Namespace:"calico-apiserver", SelfLink:"", UID:"7417bd6c-d06e-4e11-95c2-509efea5ad02", ResourceVersion:"987", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 33, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"d9556f564", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a671281be0e59e581afb049553e2ba77c0e27d1225343b14d025e15ca9614d8f", Pod:"calico-apiserver-d9556f564-hkcx8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid589052d8ce", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:33:41.717540 containerd[1444]: 2024-12-13 01:33:41.685 [INFO][5404] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="e2280bfa2e7a43cda63ef0a9569724a6aceeaba71b17d30b4ff629813c776b9e" Dec 13 01:33:41.717540 containerd[1444]: 2024-12-13 01:33:41.685 [INFO][5404] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e2280bfa2e7a43cda63ef0a9569724a6aceeaba71b17d30b4ff629813c776b9e" iface="eth0" netns="" Dec 13 01:33:41.717540 containerd[1444]: 2024-12-13 01:33:41.685 [INFO][5404] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e2280bfa2e7a43cda63ef0a9569724a6aceeaba71b17d30b4ff629813c776b9e" Dec 13 01:33:41.717540 containerd[1444]: 2024-12-13 01:33:41.685 [INFO][5404] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e2280bfa2e7a43cda63ef0a9569724a6aceeaba71b17d30b4ff629813c776b9e" Dec 13 01:33:41.717540 containerd[1444]: 2024-12-13 01:33:41.704 [INFO][5412] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e2280bfa2e7a43cda63ef0a9569724a6aceeaba71b17d30b4ff629813c776b9e" HandleID="k8s-pod-network.e2280bfa2e7a43cda63ef0a9569724a6aceeaba71b17d30b4ff629813c776b9e" Workload="localhost-k8s-calico--apiserver--d9556f564--hkcx8-eth0" Dec 13 01:33:41.717540 containerd[1444]: 2024-12-13 01:33:41.704 [INFO][5412] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:33:41.717540 containerd[1444]: 2024-12-13 01:33:41.704 [INFO][5412] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:33:41.717540 containerd[1444]: 2024-12-13 01:33:41.713 [WARNING][5412] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e2280bfa2e7a43cda63ef0a9569724a6aceeaba71b17d30b4ff629813c776b9e" HandleID="k8s-pod-network.e2280bfa2e7a43cda63ef0a9569724a6aceeaba71b17d30b4ff629813c776b9e" Workload="localhost-k8s-calico--apiserver--d9556f564--hkcx8-eth0" Dec 13 01:33:41.717540 containerd[1444]: 2024-12-13 01:33:41.713 [INFO][5412] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e2280bfa2e7a43cda63ef0a9569724a6aceeaba71b17d30b4ff629813c776b9e" HandleID="k8s-pod-network.e2280bfa2e7a43cda63ef0a9569724a6aceeaba71b17d30b4ff629813c776b9e" Workload="localhost-k8s-calico--apiserver--d9556f564--hkcx8-eth0" Dec 13 01:33:41.717540 containerd[1444]: 2024-12-13 01:33:41.714 [INFO][5412] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:33:41.717540 containerd[1444]: 2024-12-13 01:33:41.716 [INFO][5404] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="e2280bfa2e7a43cda63ef0a9569724a6aceeaba71b17d30b4ff629813c776b9e" Dec 13 01:33:41.717944 containerd[1444]: time="2024-12-13T01:33:41.717579384Z" level=info msg="TearDown network for sandbox \"e2280bfa2e7a43cda63ef0a9569724a6aceeaba71b17d30b4ff629813c776b9e\" successfully" Dec 13 01:33:41.720157 containerd[1444]: time="2024-12-13T01:33:41.720124825Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e2280bfa2e7a43cda63ef0a9569724a6aceeaba71b17d30b4ff629813c776b9e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 01:33:41.720205 containerd[1444]: time="2024-12-13T01:33:41.720183267Z" level=info msg="RemovePodSandbox \"e2280bfa2e7a43cda63ef0a9569724a6aceeaba71b17d30b4ff629813c776b9e\" returns successfully" Dec 13 01:33:41.720656 containerd[1444]: time="2024-12-13T01:33:41.720634521Z" level=info msg="StopPodSandbox for \"b121e35746b1350f74d90263185605db9f427569301109de2a624ea65061acc0\"" Dec 13 01:33:41.795235 containerd[1444]: 2024-12-13 01:33:41.755 [WARNING][5434] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b121e35746b1350f74d90263185605db9f427569301109de2a624ea65061acc0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--f78qq-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"616ab489-e9c3-404d-8315-b87df53098e2", ResourceVersion:"1086", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 33, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a04e59c13ca649df7edd398ac23b34935dd00176b2afb5aab9f02e2b724cbcb1", Pod:"csi-node-driver-f78qq", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali18dbb59cb59", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:33:41.795235 containerd[1444]: 2024-12-13 01:33:41.755 [INFO][5434] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b121e35746b1350f74d90263185605db9f427569301109de2a624ea65061acc0" Dec 13 01:33:41.795235 containerd[1444]: 2024-12-13 01:33:41.755 [INFO][5434] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b121e35746b1350f74d90263185605db9f427569301109de2a624ea65061acc0" iface="eth0" netns="" Dec 13 01:33:41.795235 containerd[1444]: 2024-12-13 01:33:41.755 [INFO][5434] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b121e35746b1350f74d90263185605db9f427569301109de2a624ea65061acc0" Dec 13 01:33:41.795235 containerd[1444]: 2024-12-13 01:33:41.755 [INFO][5434] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b121e35746b1350f74d90263185605db9f427569301109de2a624ea65061acc0" Dec 13 01:33:41.795235 containerd[1444]: 2024-12-13 01:33:41.774 [INFO][5442] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b121e35746b1350f74d90263185605db9f427569301109de2a624ea65061acc0" HandleID="k8s-pod-network.b121e35746b1350f74d90263185605db9f427569301109de2a624ea65061acc0" Workload="localhost-k8s-csi--node--driver--f78qq-eth0" Dec 13 01:33:41.795235 containerd[1444]: 2024-12-13 01:33:41.774 [INFO][5442] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:33:41.795235 containerd[1444]: 2024-12-13 01:33:41.774 [INFO][5442] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:33:41.795235 containerd[1444]: 2024-12-13 01:33:41.786 [WARNING][5442] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b121e35746b1350f74d90263185605db9f427569301109de2a624ea65061acc0" HandleID="k8s-pod-network.b121e35746b1350f74d90263185605db9f427569301109de2a624ea65061acc0" Workload="localhost-k8s-csi--node--driver--f78qq-eth0" Dec 13 01:33:41.795235 containerd[1444]: 2024-12-13 01:33:41.786 [INFO][5442] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b121e35746b1350f74d90263185605db9f427569301109de2a624ea65061acc0" HandleID="k8s-pod-network.b121e35746b1350f74d90263185605db9f427569301109de2a624ea65061acc0" Workload="localhost-k8s-csi--node--driver--f78qq-eth0" Dec 13 01:33:41.795235 containerd[1444]: 2024-12-13 01:33:41.788 [INFO][5442] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:33:41.795235 containerd[1444]: 2024-12-13 01:33:41.790 [INFO][5434] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="b121e35746b1350f74d90263185605db9f427569301109de2a624ea65061acc0" Dec 13 01:33:41.795235 containerd[1444]: time="2024-12-13T01:33:41.795207533Z" level=info msg="TearDown network for sandbox \"b121e35746b1350f74d90263185605db9f427569301109de2a624ea65061acc0\" successfully" Dec 13 01:33:41.795634 containerd[1444]: time="2024-12-13T01:33:41.795232333Z" level=info msg="StopPodSandbox for \"b121e35746b1350f74d90263185605db9f427569301109de2a624ea65061acc0\" returns successfully" Dec 13 01:33:41.796518 containerd[1444]: time="2024-12-13T01:33:41.796387970Z" level=info msg="RemovePodSandbox for \"b121e35746b1350f74d90263185605db9f427569301109de2a624ea65061acc0\"" Dec 13 01:33:41.796834 containerd[1444]: time="2024-12-13T01:33:41.796603977Z" level=info msg="Forcibly stopping sandbox \"b121e35746b1350f74d90263185605db9f427569301109de2a624ea65061acc0\"" Dec 13 01:33:41.867294 containerd[1444]: 2024-12-13 01:33:41.835 [WARNING][5464] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b121e35746b1350f74d90263185605db9f427569301109de2a624ea65061acc0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--f78qq-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"616ab489-e9c3-404d-8315-b87df53098e2", ResourceVersion:"1086", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 33, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a04e59c13ca649df7edd398ac23b34935dd00176b2afb5aab9f02e2b724cbcb1", Pod:"csi-node-driver-f78qq", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali18dbb59cb59", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:33:41.867294 containerd[1444]: 2024-12-13 01:33:41.835 [INFO][5464] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b121e35746b1350f74d90263185605db9f427569301109de2a624ea65061acc0" Dec 13 01:33:41.867294 containerd[1444]: 2024-12-13 01:33:41.835 [INFO][5464] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b121e35746b1350f74d90263185605db9f427569301109de2a624ea65061acc0" iface="eth0" netns="" Dec 13 01:33:41.867294 containerd[1444]: 2024-12-13 01:33:41.835 [INFO][5464] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b121e35746b1350f74d90263185605db9f427569301109de2a624ea65061acc0" Dec 13 01:33:41.867294 containerd[1444]: 2024-12-13 01:33:41.835 [INFO][5464] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b121e35746b1350f74d90263185605db9f427569301109de2a624ea65061acc0" Dec 13 01:33:41.867294 containerd[1444]: 2024-12-13 01:33:41.853 [INFO][5473] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b121e35746b1350f74d90263185605db9f427569301109de2a624ea65061acc0" HandleID="k8s-pod-network.b121e35746b1350f74d90263185605db9f427569301109de2a624ea65061acc0" Workload="localhost-k8s-csi--node--driver--f78qq-eth0" Dec 13 01:33:41.867294 containerd[1444]: 2024-12-13 01:33:41.853 [INFO][5473] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:33:41.867294 containerd[1444]: 2024-12-13 01:33:41.853 [INFO][5473] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:33:41.867294 containerd[1444]: 2024-12-13 01:33:41.863 [WARNING][5473] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b121e35746b1350f74d90263185605db9f427569301109de2a624ea65061acc0" HandleID="k8s-pod-network.b121e35746b1350f74d90263185605db9f427569301109de2a624ea65061acc0" Workload="localhost-k8s-csi--node--driver--f78qq-eth0" Dec 13 01:33:41.867294 containerd[1444]: 2024-12-13 01:33:41.863 [INFO][5473] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b121e35746b1350f74d90263185605db9f427569301109de2a624ea65061acc0" HandleID="k8s-pod-network.b121e35746b1350f74d90263185605db9f427569301109de2a624ea65061acc0" Workload="localhost-k8s-csi--node--driver--f78qq-eth0" Dec 13 01:33:41.867294 containerd[1444]: 2024-12-13 01:33:41.864 [INFO][5473] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:33:41.867294 containerd[1444]: 2024-12-13 01:33:41.865 [INFO][5464] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="b121e35746b1350f74d90263185605db9f427569301109de2a624ea65061acc0" Dec 13 01:33:41.867852 containerd[1444]: time="2024-12-13T01:33:41.867325546Z" level=info msg="TearDown network for sandbox \"b121e35746b1350f74d90263185605db9f427569301109de2a624ea65061acc0\" successfully" Dec 13 01:33:41.869900 containerd[1444]: time="2024-12-13T01:33:41.869861666Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b121e35746b1350f74d90263185605db9f427569301109de2a624ea65061acc0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 01:33:41.869941 containerd[1444]: time="2024-12-13T01:33:41.869923708Z" level=info msg="RemovePodSandbox \"b121e35746b1350f74d90263185605db9f427569301109de2a624ea65061acc0\" returns successfully" Dec 13 01:33:41.870363 containerd[1444]: time="2024-12-13T01:33:41.870330921Z" level=info msg="StopPodSandbox for \"b7dcd408c4f41fa8a717f4c325d95908087b3b3a6410435fac0971513fc5b75a\"" Dec 13 01:33:41.934033 containerd[1444]: 2024-12-13 01:33:41.903 [WARNING][5495] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b7dcd408c4f41fa8a717f4c325d95908087b3b3a6410435fac0971513fc5b75a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--rwfq6-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"fd5b119e-695f-466c-85b1-1df84ffeb4f8", ResourceVersion:"938", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 32, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5bb5eabd7bd0a4c289b7fdfe4ce239385a1788e67871e7ed2961c39e6046a448", Pod:"coredns-76f75df574-rwfq6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie11785907dc", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:33:41.934033 containerd[1444]: 2024-12-13 01:33:41.903 [INFO][5495] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b7dcd408c4f41fa8a717f4c325d95908087b3b3a6410435fac0971513fc5b75a" Dec 13 01:33:41.934033 containerd[1444]: 2024-12-13 01:33:41.904 [INFO][5495] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b7dcd408c4f41fa8a717f4c325d95908087b3b3a6410435fac0971513fc5b75a" iface="eth0" netns="" Dec 13 01:33:41.934033 containerd[1444]: 2024-12-13 01:33:41.904 [INFO][5495] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b7dcd408c4f41fa8a717f4c325d95908087b3b3a6410435fac0971513fc5b75a" Dec 13 01:33:41.934033 containerd[1444]: 2024-12-13 01:33:41.904 [INFO][5495] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b7dcd408c4f41fa8a717f4c325d95908087b3b3a6410435fac0971513fc5b75a" Dec 13 01:33:41.934033 containerd[1444]: 2024-12-13 01:33:41.922 [INFO][5503] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b7dcd408c4f41fa8a717f4c325d95908087b3b3a6410435fac0971513fc5b75a" HandleID="k8s-pod-network.b7dcd408c4f41fa8a717f4c325d95908087b3b3a6410435fac0971513fc5b75a" Workload="localhost-k8s-coredns--76f75df574--rwfq6-eth0" Dec 13 01:33:41.934033 containerd[1444]: 2024-12-13 01:33:41.922 [INFO][5503] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:33:41.934033 containerd[1444]: 2024-12-13 01:33:41.922 [INFO][5503] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 01:33:41.934033 containerd[1444]: 2024-12-13 01:33:41.929 [WARNING][5503] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="b7dcd408c4f41fa8a717f4c325d95908087b3b3a6410435fac0971513fc5b75a" HandleID="k8s-pod-network.b7dcd408c4f41fa8a717f4c325d95908087b3b3a6410435fac0971513fc5b75a" Workload="localhost-k8s-coredns--76f75df574--rwfq6-eth0" Dec 13 01:33:41.934033 containerd[1444]: 2024-12-13 01:33:41.929 [INFO][5503] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b7dcd408c4f41fa8a717f4c325d95908087b3b3a6410435fac0971513fc5b75a" HandleID="k8s-pod-network.b7dcd408c4f41fa8a717f4c325d95908087b3b3a6410435fac0971513fc5b75a" Workload="localhost-k8s-coredns--76f75df574--rwfq6-eth0" Dec 13 01:33:41.934033 containerd[1444]: 2024-12-13 01:33:41.931 [INFO][5503] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:33:41.934033 containerd[1444]: 2024-12-13 01:33:41.932 [INFO][5495] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="b7dcd408c4f41fa8a717f4c325d95908087b3b3a6410435fac0971513fc5b75a" Dec 13 01:33:41.934457 containerd[1444]: time="2024-12-13T01:33:41.934169231Z" level=info msg="TearDown network for sandbox \"b7dcd408c4f41fa8a717f4c325d95908087b3b3a6410435fac0971513fc5b75a\" successfully" Dec 13 01:33:41.934457 containerd[1444]: time="2024-12-13T01:33:41.934210712Z" level=info msg="StopPodSandbox for \"b7dcd408c4f41fa8a717f4c325d95908087b3b3a6410435fac0971513fc5b75a\" returns successfully" Dec 13 01:33:41.934938 containerd[1444]: time="2024-12-13T01:33:41.934654486Z" level=info msg="RemovePodSandbox for \"b7dcd408c4f41fa8a717f4c325d95908087b3b3a6410435fac0971513fc5b75a\"" Dec 13 01:33:41.934938 containerd[1444]: time="2024-12-13T01:33:41.934689048Z" level=info msg="Forcibly stopping sandbox \"b7dcd408c4f41fa8a717f4c325d95908087b3b3a6410435fac0971513fc5b75a\"" Dec 13 01:33:41.996890 containerd[1444]: 2024-12-13 01:33:41.965 [WARNING][5525] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b7dcd408c4f41fa8a717f4c325d95908087b3b3a6410435fac0971513fc5b75a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--rwfq6-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"fd5b119e-695f-466c-85b1-1df84ffeb4f8", ResourceVersion:"938", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 32, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5bb5eabd7bd0a4c289b7fdfe4ce239385a1788e67871e7ed2961c39e6046a448", Pod:"coredns-76f75df574-rwfq6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie11785907dc", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:33:41.996890 containerd[1444]: 2024-12-13 01:33:41.965 [INFO][5525] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b7dcd408c4f41fa8a717f4c325d95908087b3b3a6410435fac0971513fc5b75a" Dec 13 01:33:41.996890 containerd[1444]: 2024-12-13 01:33:41.965 [INFO][5525] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b7dcd408c4f41fa8a717f4c325d95908087b3b3a6410435fac0971513fc5b75a" iface="eth0" netns="" Dec 13 01:33:41.996890 containerd[1444]: 2024-12-13 01:33:41.965 [INFO][5525] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b7dcd408c4f41fa8a717f4c325d95908087b3b3a6410435fac0971513fc5b75a" Dec 13 01:33:41.996890 containerd[1444]: 2024-12-13 01:33:41.965 [INFO][5525] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b7dcd408c4f41fa8a717f4c325d95908087b3b3a6410435fac0971513fc5b75a" Dec 13 01:33:41.996890 containerd[1444]: 2024-12-13 01:33:41.982 [INFO][5533] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b7dcd408c4f41fa8a717f4c325d95908087b3b3a6410435fac0971513fc5b75a" HandleID="k8s-pod-network.b7dcd408c4f41fa8a717f4c325d95908087b3b3a6410435fac0971513fc5b75a" Workload="localhost-k8s-coredns--76f75df574--rwfq6-eth0" Dec 13 01:33:41.996890 containerd[1444]: 2024-12-13 01:33:41.983 [INFO][5533] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:33:41.996890 containerd[1444]: 2024-12-13 01:33:41.983 [INFO][5533] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 01:33:41.996890 containerd[1444]: 2024-12-13 01:33:41.991 [WARNING][5533] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="b7dcd408c4f41fa8a717f4c325d95908087b3b3a6410435fac0971513fc5b75a" HandleID="k8s-pod-network.b7dcd408c4f41fa8a717f4c325d95908087b3b3a6410435fac0971513fc5b75a" Workload="localhost-k8s-coredns--76f75df574--rwfq6-eth0" Dec 13 01:33:41.996890 containerd[1444]: 2024-12-13 01:33:41.991 [INFO][5533] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b7dcd408c4f41fa8a717f4c325d95908087b3b3a6410435fac0971513fc5b75a" HandleID="k8s-pod-network.b7dcd408c4f41fa8a717f4c325d95908087b3b3a6410435fac0971513fc5b75a" Workload="localhost-k8s-coredns--76f75df574--rwfq6-eth0" Dec 13 01:33:41.996890 containerd[1444]: 2024-12-13 01:33:41.993 [INFO][5533] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:33:41.996890 containerd[1444]: 2024-12-13 01:33:41.995 [INFO][5525] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="b7dcd408c4f41fa8a717f4c325d95908087b3b3a6410435fac0971513fc5b75a" Dec 13 01:33:41.997282 containerd[1444]: time="2024-12-13T01:33:41.996928867Z" level=info msg="TearDown network for sandbox \"b7dcd408c4f41fa8a717f4c325d95908087b3b3a6410435fac0971513fc5b75a\" successfully" Dec 13 01:33:41.999644 containerd[1444]: time="2024-12-13T01:33:41.999604576Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b7dcd408c4f41fa8a717f4c325d95908087b3b3a6410435fac0971513fc5b75a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 01:33:41.999702 containerd[1444]: time="2024-12-13T01:33:41.999662257Z" level=info msg="RemovePodSandbox \"b7dcd408c4f41fa8a717f4c325d95908087b3b3a6410435fac0971513fc5b75a\" returns successfully" Dec 13 01:33:42.000578 containerd[1444]: time="2024-12-13T01:33:42.000315434Z" level=info msg="StopPodSandbox for \"4955b02e1457118e8633dd6c8a5802b1901118578fd5a31d7a20b8f04460eb0b\"" Dec 13 01:33:42.067858 containerd[1444]: 2024-12-13 01:33:42.037 [WARNING][5556] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4955b02e1457118e8633dd6c8a5802b1901118578fd5a31d7a20b8f04460eb0b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--f98xf-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"14a279e5-ffed-4fd4-a4aa-0fc271831c85", ResourceVersion:"974", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 32, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3578ed51142b70f6a83233fc1c01baac8c3d58fcc1ff81822ab122fa6534d01e", Pod:"coredns-76f75df574-f98xf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali07585eb5b25", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:33:42.067858 containerd[1444]: 2024-12-13 01:33:42.037 [INFO][5556] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="4955b02e1457118e8633dd6c8a5802b1901118578fd5a31d7a20b8f04460eb0b" Dec 13 01:33:42.067858 containerd[1444]: 2024-12-13 01:33:42.037 [INFO][5556] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4955b02e1457118e8633dd6c8a5802b1901118578fd5a31d7a20b8f04460eb0b" iface="eth0" netns="" Dec 13 01:33:42.067858 containerd[1444]: 2024-12-13 01:33:42.037 [INFO][5556] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="4955b02e1457118e8633dd6c8a5802b1901118578fd5a31d7a20b8f04460eb0b" Dec 13 01:33:42.067858 containerd[1444]: 2024-12-13 01:33:42.037 [INFO][5556] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4955b02e1457118e8633dd6c8a5802b1901118578fd5a31d7a20b8f04460eb0b" Dec 13 01:33:42.067858 containerd[1444]: 2024-12-13 01:33:42.055 [INFO][5564] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4955b02e1457118e8633dd6c8a5802b1901118578fd5a31d7a20b8f04460eb0b" HandleID="k8s-pod-network.4955b02e1457118e8633dd6c8a5802b1901118578fd5a31d7a20b8f04460eb0b" Workload="localhost-k8s-coredns--76f75df574--f98xf-eth0" Dec 13 01:33:42.067858 containerd[1444]: 2024-12-13 01:33:42.055 [INFO][5564] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:33:42.067858 containerd[1444]: 2024-12-13 01:33:42.055 [INFO][5564] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 01:33:42.067858 containerd[1444]: 2024-12-13 01:33:42.063 [WARNING][5564] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="4955b02e1457118e8633dd6c8a5802b1901118578fd5a31d7a20b8f04460eb0b" HandleID="k8s-pod-network.4955b02e1457118e8633dd6c8a5802b1901118578fd5a31d7a20b8f04460eb0b" Workload="localhost-k8s-coredns--76f75df574--f98xf-eth0" Dec 13 01:33:42.067858 containerd[1444]: 2024-12-13 01:33:42.063 [INFO][5564] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4955b02e1457118e8633dd6c8a5802b1901118578fd5a31d7a20b8f04460eb0b" HandleID="k8s-pod-network.4955b02e1457118e8633dd6c8a5802b1901118578fd5a31d7a20b8f04460eb0b" Workload="localhost-k8s-coredns--76f75df574--f98xf-eth0" Dec 13 01:33:42.067858 containerd[1444]: 2024-12-13 01:33:42.065 [INFO][5564] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:33:42.067858 containerd[1444]: 2024-12-13 01:33:42.066 [INFO][5556] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="4955b02e1457118e8633dd6c8a5802b1901118578fd5a31d7a20b8f04460eb0b" Dec 13 01:33:42.067858 containerd[1444]: time="2024-12-13T01:33:42.067830821Z" level=info msg="TearDown network for sandbox \"4955b02e1457118e8633dd6c8a5802b1901118578fd5a31d7a20b8f04460eb0b\" successfully" Dec 13 01:33:42.067858 containerd[1444]: time="2024-12-13T01:33:42.067855820Z" level=info msg="StopPodSandbox for \"4955b02e1457118e8633dd6c8a5802b1901118578fd5a31d7a20b8f04460eb0b\" returns successfully" Dec 13 01:33:42.069499 containerd[1444]: time="2024-12-13T01:33:42.068319168Z" level=info msg="RemovePodSandbox for \"4955b02e1457118e8633dd6c8a5802b1901118578fd5a31d7a20b8f04460eb0b\"" Dec 13 01:33:42.069499 containerd[1444]: time="2024-12-13T01:33:42.068351447Z" level=info msg="Forcibly stopping sandbox \"4955b02e1457118e8633dd6c8a5802b1901118578fd5a31d7a20b8f04460eb0b\"" Dec 13 01:33:42.139158 containerd[1444]: 2024-12-13 01:33:42.101 [WARNING][5586] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4955b02e1457118e8633dd6c8a5802b1901118578fd5a31d7a20b8f04460eb0b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--f98xf-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"14a279e5-ffed-4fd4-a4aa-0fc271831c85", ResourceVersion:"974", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 32, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3578ed51142b70f6a83233fc1c01baac8c3d58fcc1ff81822ab122fa6534d01e", Pod:"coredns-76f75df574-f98xf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali07585eb5b25", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:33:42.139158 containerd[1444]: 2024-12-13 01:33:42.102 [INFO][5586] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="4955b02e1457118e8633dd6c8a5802b1901118578fd5a31d7a20b8f04460eb0b" Dec 13 01:33:42.139158 containerd[1444]: 2024-12-13 01:33:42.102 [INFO][5586] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4955b02e1457118e8633dd6c8a5802b1901118578fd5a31d7a20b8f04460eb0b" iface="eth0" netns="" Dec 13 01:33:42.139158 containerd[1444]: 2024-12-13 01:33:42.102 [INFO][5586] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="4955b02e1457118e8633dd6c8a5802b1901118578fd5a31d7a20b8f04460eb0b" Dec 13 01:33:42.139158 containerd[1444]: 2024-12-13 01:33:42.102 [INFO][5586] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4955b02e1457118e8633dd6c8a5802b1901118578fd5a31d7a20b8f04460eb0b" Dec 13 01:33:42.139158 containerd[1444]: 2024-12-13 01:33:42.125 [INFO][5593] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4955b02e1457118e8633dd6c8a5802b1901118578fd5a31d7a20b8f04460eb0b" HandleID="k8s-pod-network.4955b02e1457118e8633dd6c8a5802b1901118578fd5a31d7a20b8f04460eb0b" Workload="localhost-k8s-coredns--76f75df574--f98xf-eth0" Dec 13 01:33:42.139158 containerd[1444]: 2024-12-13 01:33:42.125 [INFO][5593] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:33:42.139158 containerd[1444]: 2024-12-13 01:33:42.125 [INFO][5593] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 01:33:42.139158 containerd[1444]: 2024-12-13 01:33:42.133 [WARNING][5593] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="4955b02e1457118e8633dd6c8a5802b1901118578fd5a31d7a20b8f04460eb0b" HandleID="k8s-pod-network.4955b02e1457118e8633dd6c8a5802b1901118578fd5a31d7a20b8f04460eb0b" Workload="localhost-k8s-coredns--76f75df574--f98xf-eth0" Dec 13 01:33:42.139158 containerd[1444]: 2024-12-13 01:33:42.133 [INFO][5593] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4955b02e1457118e8633dd6c8a5802b1901118578fd5a31d7a20b8f04460eb0b" HandleID="k8s-pod-network.4955b02e1457118e8633dd6c8a5802b1901118578fd5a31d7a20b8f04460eb0b" Workload="localhost-k8s-coredns--76f75df574--f98xf-eth0" Dec 13 01:33:42.139158 containerd[1444]: 2024-12-13 01:33:42.135 [INFO][5593] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:33:42.139158 containerd[1444]: 2024-12-13 01:33:42.136 [INFO][5586] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="4955b02e1457118e8633dd6c8a5802b1901118578fd5a31d7a20b8f04460eb0b" Dec 13 01:33:42.139158 containerd[1444]: time="2024-12-13T01:33:42.138034190Z" level=info msg="TearDown network for sandbox \"4955b02e1457118e8633dd6c8a5802b1901118578fd5a31d7a20b8f04460eb0b\" successfully" Dec 13 01:33:42.145043 containerd[1444]: time="2024-12-13T01:33:42.145005644Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4955b02e1457118e8633dd6c8a5802b1901118578fd5a31d7a20b8f04460eb0b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 01:33:42.145170 containerd[1444]: time="2024-12-13T01:33:42.145153521Z" level=info msg="RemovePodSandbox \"4955b02e1457118e8633dd6c8a5802b1901118578fd5a31d7a20b8f04460eb0b\" returns successfully" Dec 13 01:33:42.760353 systemd[1]: Started sshd@18-10.0.0.74:22-10.0.0.1:55818.service - OpenSSH per-connection server daemon (10.0.0.1:55818). Dec 13 01:33:42.804769 sshd[5601]: Accepted publickey for core from 10.0.0.1 port 55818 ssh2: RSA SHA256:yVKhZEHbC7ylZ7bY3Y8pwdh1t/xp6Vz/y3yLFfd9j+Q Dec 13 01:33:42.805963 sshd[5601]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:33:42.809908 systemd-logind[1427]: New session 19 of user core. Dec 13 01:33:42.821969 systemd[1]: Started session-19.scope - Session 19 of User core. Dec 13 01:33:42.986517 sshd[5601]: pam_unix(sshd:session): session closed for user core Dec 13 01:33:42.989872 systemd[1]: sshd@18-10.0.0.74:22-10.0.0.1:55818.service: Deactivated successfully. Dec 13 01:33:42.992420 systemd[1]: session-19.scope: Deactivated successfully. Dec 13 01:33:42.993253 systemd-logind[1427]: Session 19 logged out. Waiting for processes to exit. Dec 13 01:33:42.994300 systemd-logind[1427]: Removed session 19. Dec 13 01:33:47.996435 systemd[1]: Started sshd@19-10.0.0.74:22-10.0.0.1:55826.service - OpenSSH per-connection server daemon (10.0.0.1:55826). Dec 13 01:33:48.035945 sshd[5641]: Accepted publickey for core from 10.0.0.1 port 55826 ssh2: RSA SHA256:yVKhZEHbC7ylZ7bY3Y8pwdh1t/xp6Vz/y3yLFfd9j+Q Dec 13 01:33:48.036470 sshd[5641]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:33:48.040057 systemd-logind[1427]: New session 20 of user core. Dec 13 01:33:48.054023 systemd[1]: Started session-20.scope - Session 20 of User core. 
Dec 13 01:33:48.204340 sshd[5641]: pam_unix(sshd:session): session closed for user core Dec 13 01:33:48.207168 systemd[1]: sshd@19-10.0.0.74:22-10.0.0.1:55826.service: Deactivated successfully. Dec 13 01:33:48.208888 systemd[1]: session-20.scope: Deactivated successfully. Dec 13 01:33:48.210511 systemd-logind[1427]: Session 20 logged out. Waiting for processes to exit. Dec 13 01:33:48.211511 systemd-logind[1427]: Removed session 20. Dec 13 01:33:53.215120 systemd[1]: Started sshd@20-10.0.0.74:22-10.0.0.1:50488.service - OpenSSH per-connection server daemon (10.0.0.1:50488). Dec 13 01:33:53.261024 sshd[5658]: Accepted publickey for core from 10.0.0.1 port 50488 ssh2: RSA SHA256:yVKhZEHbC7ylZ7bY3Y8pwdh1t/xp6Vz/y3yLFfd9j+Q Dec 13 01:33:53.261304 sshd[5658]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:33:53.269290 systemd-logind[1427]: New session 21 of user core. Dec 13 01:33:53.275006 systemd[1]: Started session-21.scope - Session 21 of User core. Dec 13 01:33:53.411913 sshd[5658]: pam_unix(sshd:session): session closed for user core Dec 13 01:33:53.414920 systemd[1]: sshd@20-10.0.0.74:22-10.0.0.1:50488.service: Deactivated successfully. Dec 13 01:33:53.416635 systemd[1]: session-21.scope: Deactivated successfully. Dec 13 01:33:53.417296 systemd-logind[1427]: Session 21 logged out. Waiting for processes to exit. Dec 13 01:33:53.418066 systemd-logind[1427]: Removed session 21. Dec 13 01:33:54.222465 kubelet[2557]: E1213 01:33:54.222419 2557 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
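The kubelet warning at the end of this log ("Nameserver limits exceeded") indicates the node's resolv.conf listed more than the three nameservers the glibc resolver will use (MAXNS is 3), so kubelet applies only the first three it finds (here 1.1.1.1, 1.0.0.1, 8.8.8.8) and reports the rest as omitted. Below is a minimal sketch of that truncation behaviour, not kubelet source; the fourth nameserver (8.8.4.4) is invented for illustration, since the log only shows the three that were kept.

// Hypothetical sketch (not kubelet code): why the "Nameserver limits exceeded"
// warning fires. Only the first three "nameserver" lines are applied.
package main

import (
	"fmt"
	"strings"
)

const maxNameservers = 3 // glibc MAXNS

func appliedNameservers(resolvConf string) []string {
	var servers []string
	for _, line := range strings.Split(resolvConf, "\n") {
		fields := strings.Fields(line)
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if len(servers) > maxNameservers {
		// Mirrors the wording of the kubelet log line above.
		fmt.Printf("Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: %s\n",
			strings.Join(servers[:maxNameservers], " "))
		servers = servers[:maxNameservers]
	}
	return servers
}

func main() {
	// Four configured servers -> only the first three are applied.
	// The fourth entry is an assumption; the real file contents are not in the log.
	conf := "nameserver 1.1.1.1\nnameserver 1.0.0.1\nnameserver 8.8.8.8\nnameserver 8.8.4.4\n"
	fmt.Println(appliedNameservers(conf))
}

Running the sketch prints the same style of warning followed by [1.1.1.1 1.0.0.1 8.8.8.8]; on the real node, trimming the host resolv.conf to three entries (or pointing kubelet at a dedicated resolv.conf via --resolv-conf) would silence the message.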