Dec 13 01:25:36.896732 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Dec 13 01:25:36.896754 kernel: Linux version 6.6.65-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Thu Dec 12 23:24:21 -00 2024
Dec 13 01:25:36.896765 kernel: KASLR enabled
Dec 13 01:25:36.896770 kernel: efi: EFI v2.7 by EDK II
Dec 13 01:25:36.896776 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdba86018 ACPI 2.0=0xd9710018 RNG=0xd971e498 MEMRESERVE=0xd9b43d18
Dec 13 01:25:36.896782 kernel: random: crng init done
Dec 13 01:25:36.896789 kernel: ACPI: Early table checksum verification disabled
Dec 13 01:25:36.896795 kernel: ACPI: RSDP 0x00000000D9710018 000024 (v02 BOCHS )
Dec 13 01:25:36.896801 kernel: ACPI: XSDT 0x00000000D971FE98 000064 (v01 BOCHS BXPC 00000001 01000013)
Dec 13 01:25:36.896809 kernel: ACPI: FACP 0x00000000D971FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 01:25:36.896815 kernel: ACPI: DSDT 0x00000000D9717518 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 01:25:36.896821 kernel: ACPI: APIC 0x00000000D971FC18 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 01:25:36.896827 kernel: ACPI: PPTT 0x00000000D971D898 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 01:25:36.896833 kernel: ACPI: GTDT 0x00000000D971E818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 01:25:36.896840 kernel: ACPI: MCFG 0x00000000D971E918 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 01:25:36.896848 kernel: ACPI: SPCR 0x00000000D971FF98 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 01:25:36.896855 kernel: ACPI: DBG2 0x00000000D971E418 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 01:25:36.896861 kernel: ACPI: IORT 0x00000000D971E718 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 01:25:36.896868 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Dec 13 01:25:36.896874 kernel: NUMA: Failed to initialise from firmware
Dec 13 01:25:36.896881 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Dec 13 01:25:36.896887 kernel: NUMA: NODE_DATA [mem 0xdc957800-0xdc95cfff]
Dec 13 01:25:36.896893 kernel: Zone ranges:
Dec 13 01:25:36.896900 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Dec 13 01:25:36.896906 kernel: DMA32 empty
Dec 13 01:25:36.896913 kernel: Normal empty
Dec 13 01:25:36.896920 kernel: Movable zone start for each node
Dec 13 01:25:36.896926 kernel: Early memory node ranges
Dec 13 01:25:36.896932 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff]
Dec 13 01:25:36.896939 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Dec 13 01:25:36.896945 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Dec 13 01:25:36.896951 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Dec 13 01:25:36.896958 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Dec 13 01:25:36.896964 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Dec 13 01:25:36.896970 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Dec 13 01:25:36.896977 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Dec 13 01:25:36.896983 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Dec 13 01:25:36.896991 kernel: psci: probing for conduit method from ACPI.
Dec 13 01:25:36.896997 kernel: psci: PSCIv1.1 detected in firmware.
Dec 13 01:25:36.897004 kernel: psci: Using standard PSCI v0.2 function IDs
Dec 13 01:25:36.897013 kernel: psci: Trusted OS migration not required
Dec 13 01:25:36.897019 kernel: psci: SMC Calling Convention v1.1
Dec 13 01:25:36.897026 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Dec 13 01:25:36.897034 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Dec 13 01:25:36.897041 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Dec 13 01:25:36.897048 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Dec 13 01:25:36.897055 kernel: Detected PIPT I-cache on CPU0
Dec 13 01:25:36.897062 kernel: CPU features: detected: GIC system register CPU interface
Dec 13 01:25:36.897069 kernel: CPU features: detected: Hardware dirty bit management
Dec 13 01:25:36.897075 kernel: CPU features: detected: Spectre-v4
Dec 13 01:25:36.897082 kernel: CPU features: detected: Spectre-BHB
Dec 13 01:25:36.897089 kernel: CPU features: kernel page table isolation forced ON by KASLR
Dec 13 01:25:36.897096 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Dec 13 01:25:36.897103 kernel: CPU features: detected: ARM erratum 1418040
Dec 13 01:25:36.897110 kernel: CPU features: detected: SSBS not fully self-synchronizing
Dec 13 01:25:36.897117 kernel: alternatives: applying boot alternatives
Dec 13 01:25:36.897125 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=9494f75a68cfbdce95d0d2f9b58d6d75bc38ee5b4e31dfc2a6da695ffafefba6
Dec 13 01:25:36.897132 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Dec 13 01:25:36.897139 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Dec 13 01:25:36.897146 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 13 01:25:36.897153 kernel: Fallback order for Node 0: 0
Dec 13 01:25:36.897159 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Dec 13 01:25:36.897166 kernel: Policy zone: DMA
Dec 13 01:25:36.897173 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 13 01:25:36.897180 kernel: software IO TLB: area num 4.
Dec 13 01:25:36.897187 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Dec 13 01:25:36.897194 kernel: Memory: 2386528K/2572288K available (10240K kernel code, 2184K rwdata, 8096K rodata, 39360K init, 897K bss, 185760K reserved, 0K cma-reserved)
Dec 13 01:25:36.897201 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Dec 13 01:25:36.897208 kernel: trace event string verifier disabled
Dec 13 01:25:36.897215 kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 13 01:25:36.897222 kernel: rcu: RCU event tracing is enabled.
Dec 13 01:25:36.897229 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Dec 13 01:25:36.897236 kernel: Trampoline variant of Tasks RCU enabled.
Dec 13 01:25:36.897243 kernel: Tracing variant of Tasks RCU enabled.
Dec 13 01:25:36.897251 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 13 01:25:36.897258 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Dec 13 01:25:36.897265 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Dec 13 01:25:36.897272 kernel: GICv3: 256 SPIs implemented
Dec 13 01:25:36.897279 kernel: GICv3: 0 Extended SPIs implemented
Dec 13 01:25:36.897286 kernel: Root IRQ handler: gic_handle_irq
Dec 13 01:25:36.897292 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Dec 13 01:25:36.897299 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Dec 13 01:25:36.897306 kernel: ITS [mem 0x08080000-0x0809ffff]
Dec 13 01:25:36.897313 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
Dec 13 01:25:36.897331 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
Dec 13 01:25:36.897338 kernel: GICv3: using LPI property table @0x00000000400f0000
Dec 13 01:25:36.897345 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Dec 13 01:25:36.897353 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec 13 01:25:36.897360 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 13 01:25:36.897367 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Dec 13 01:25:36.897374 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Dec 13 01:25:36.897381 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Dec 13 01:25:36.897388 kernel: arm-pv: using stolen time PV
Dec 13 01:25:36.897395 kernel: Console: colour dummy device 80x25
Dec 13 01:25:36.897402 kernel: ACPI: Core revision 20230628
Dec 13 01:25:36.897410 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Dec 13 01:25:36.897417 kernel: pid_max: default: 32768 minimum: 301
Dec 13 01:25:36.897425 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Dec 13 01:25:36.897432 kernel: landlock: Up and running.
Dec 13 01:25:36.897447 kernel: SELinux: Initializing.
Dec 13 01:25:36.897454 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 13 01:25:36.897461 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 13 01:25:36.897468 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Dec 13 01:25:36.897475 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Dec 13 01:25:36.897482 kernel: rcu: Hierarchical SRCU implementation.
Dec 13 01:25:36.897489 kernel: rcu: Max phase no-delay instances is 400.
Dec 13 01:25:36.897498 kernel: Platform MSI: ITS@0x8080000 domain created
Dec 13 01:25:36.897505 kernel: PCI/MSI: ITS@0x8080000 domain created
Dec 13 01:25:36.897512 kernel: Remapping and enabling EFI services.
Dec 13 01:25:36.897519 kernel: smp: Bringing up secondary CPUs ...
Dec 13 01:25:36.897526 kernel: Detected PIPT I-cache on CPU1
Dec 13 01:25:36.897533 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Dec 13 01:25:36.897540 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Dec 13 01:25:36.897547 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 13 01:25:36.897554 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Dec 13 01:25:36.897561 kernel: Detected PIPT I-cache on CPU2
Dec 13 01:25:36.897569 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Dec 13 01:25:36.897576 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Dec 13 01:25:36.897588 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 13 01:25:36.897596 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Dec 13 01:25:36.897603 kernel: Detected PIPT I-cache on CPU3
Dec 13 01:25:36.897629 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Dec 13 01:25:36.897637 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Dec 13 01:25:36.897644 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 13 01:25:36.897652 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Dec 13 01:25:36.897661 kernel: smp: Brought up 1 node, 4 CPUs
Dec 13 01:25:36.897668 kernel: SMP: Total of 4 processors activated.
Dec 13 01:25:36.897676 kernel: CPU features: detected: 32-bit EL0 Support
Dec 13 01:25:36.897683 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Dec 13 01:25:36.897691 kernel: CPU features: detected: Common not Private translations
Dec 13 01:25:36.897698 kernel: CPU features: detected: CRC32 instructions
Dec 13 01:25:36.897705 kernel: CPU features: detected: Enhanced Virtualization Traps
Dec 13 01:25:36.897713 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Dec 13 01:25:36.897721 kernel: CPU features: detected: LSE atomic instructions
Dec 13 01:25:36.897728 kernel: CPU features: detected: Privileged Access Never
Dec 13 01:25:36.897736 kernel: CPU features: detected: RAS Extension Support
Dec 13 01:25:36.897743 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Dec 13 01:25:36.897750 kernel: CPU: All CPU(s) started at EL1
Dec 13 01:25:36.897758 kernel: alternatives: applying system-wide alternatives
Dec 13 01:25:36.897765 kernel: devtmpfs: initialized
Dec 13 01:25:36.897772 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 13 01:25:36.897780 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Dec 13 01:25:36.897788 kernel: pinctrl core: initialized pinctrl subsystem
Dec 13 01:25:36.897795 kernel: SMBIOS 3.0.0 present.
Dec 13 01:25:36.897803 kernel: DMI: QEMU KVM Virtual Machine, BIOS edk2-20230524-3.fc38 05/24/2023
Dec 13 01:25:36.897810 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 13 01:25:36.897817 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Dec 13 01:25:36.897825 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Dec 13 01:25:36.897832 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Dec 13 01:25:36.897840 kernel: audit: initializing netlink subsys (disabled)
Dec 13 01:25:36.897847 kernel: audit: type=2000 audit(0.023:1): state=initialized audit_enabled=0 res=1
Dec 13 01:25:36.897855 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 13 01:25:36.897863 kernel: cpuidle: using governor menu
Dec 13 01:25:36.897870 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Dec 13 01:25:36.897877 kernel: ASID allocator initialised with 32768 entries
Dec 13 01:25:36.897885 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 13 01:25:36.897892 kernel: Serial: AMBA PL011 UART driver
Dec 13 01:25:36.897899 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Dec 13 01:25:36.897907 kernel: Modules: 0 pages in range for non-PLT usage
Dec 13 01:25:36.897914 kernel: Modules: 509040 pages in range for PLT usage
Dec 13 01:25:36.897923 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Dec 13 01:25:36.897930 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Dec 13 01:25:36.897937 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Dec 13 01:25:36.897944 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Dec 13 01:25:36.897952 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Dec 13 01:25:36.897959 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Dec 13 01:25:36.897967 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Dec 13 01:25:36.897974 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Dec 13 01:25:36.897981 kernel: ACPI: Added _OSI(Module Device)
Dec 13 01:25:36.897989 kernel: ACPI: Added _OSI(Processor Device)
Dec 13 01:25:36.897997 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Dec 13 01:25:36.898004 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 13 01:25:36.898011 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 13 01:25:36.898019 kernel: ACPI: Interpreter enabled
Dec 13 01:25:36.898026 kernel: ACPI: Using GIC for interrupt routing
Dec 13 01:25:36.898033 kernel: ACPI: MCFG table detected, 1 entries
Dec 13 01:25:36.898040 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Dec 13 01:25:36.898048 kernel: printk: console [ttyAMA0] enabled
Dec 13 01:25:36.898056 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 13 01:25:36.898194 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Dec 13 01:25:36.898282 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Dec 13 01:25:36.898356 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Dec 13 01:25:36.898419 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Dec 13 01:25:36.898492 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Dec 13 01:25:36.898502 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Dec 13 01:25:36.898512 kernel: PCI host bridge to bus 0000:00
Dec 13 01:25:36.898581 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Dec 13 01:25:36.898663 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Dec 13 01:25:36.898722 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Dec 13 01:25:36.898779 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 13 01:25:36.898859 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Dec 13 01:25:36.898936 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Dec 13 01:25:36.899010 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Dec 13 01:25:36.899080 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Dec 13 01:25:36.899147 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Dec 13 01:25:36.899212 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Dec 13 01:25:36.899279 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Dec 13 01:25:36.899345 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Dec 13 01:25:36.899404 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Dec 13 01:25:36.899472 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Dec 13 01:25:36.899532 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Dec 13 01:25:36.899542 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Dec 13 01:25:36.899549 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Dec 13 01:25:36.899557 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Dec 13 01:25:36.899564 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Dec 13 01:25:36.899571 kernel: iommu: Default domain type: Translated
Dec 13 01:25:36.899579 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Dec 13 01:25:36.899589 kernel: efivars: Registered efivars operations
Dec 13 01:25:36.899596 kernel: vgaarb: loaded
Dec 13 01:25:36.899603 kernel: clocksource: Switched to clocksource arch_sys_counter
Dec 13 01:25:36.899620 kernel: VFS: Disk quotas dquot_6.6.0
Dec 13 01:25:36.899631 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 13 01:25:36.899639 kernel: pnp: PnP ACPI init
Dec 13 01:25:36.899723 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Dec 13 01:25:36.899734 kernel: pnp: PnP ACPI: found 1 devices
Dec 13 01:25:36.899745 kernel: NET: Registered PF_INET protocol family
Dec 13 01:25:36.899752 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Dec 13 01:25:36.899760 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Dec 13 01:25:36.899767 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 13 01:25:36.899775 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 13 01:25:36.899782 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Dec 13 01:25:36.899790 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Dec 13 01:25:36.899798 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 13 01:25:36.899805 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 13 01:25:36.899814 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 13 01:25:36.899822 kernel: PCI: CLS 0 bytes, default 64
Dec 13 01:25:36.899829 kernel: kvm [1]: HYP mode not available
Dec 13 01:25:36.899837 kernel: Initialise system trusted keyrings
Dec 13 01:25:36.899845 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Dec 13 01:25:36.899852 kernel: Key type asymmetric registered
Dec 13 01:25:36.899860 kernel: Asymmetric key parser 'x509' registered
Dec 13 01:25:36.899867 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Dec 13 01:25:36.899875 kernel: io scheduler mq-deadline registered
Dec 13 01:25:36.899883 kernel: io scheduler kyber registered
Dec 13 01:25:36.899891 kernel: io scheduler bfq registered
Dec 13 01:25:36.899898 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Dec 13 01:25:36.899906 kernel: ACPI: button: Power Button [PWRB]
Dec 13 01:25:36.899914 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Dec 13 01:25:36.899991 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Dec 13 01:25:36.900001 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 13 01:25:36.900009 kernel: thunder_xcv, ver 1.0
Dec 13 01:25:36.900016 kernel: thunder_bgx, ver 1.0
Dec 13 01:25:36.900026 kernel: nicpf, ver 1.0
Dec 13 01:25:36.900033 kernel: nicvf, ver 1.0
Dec 13 01:25:36.900170 kernel: rtc-efi rtc-efi.0: registered as rtc0
Dec 13 01:25:36.900253 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-12-13T01:25:36 UTC (1734053136)
Dec 13 01:25:36.900263 kernel: hid: raw HID events driver (C) Jiri Kosina
Dec 13 01:25:36.900271 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Dec 13 01:25:36.900279 kernel: watchdog: Delayed init of the lockup detector failed: -19
Dec 13 01:25:36.900286 kernel: watchdog: Hard watchdog permanently disabled
Dec 13 01:25:36.900298 kernel: NET: Registered PF_INET6 protocol family
Dec 13 01:25:36.900305 kernel: Segment Routing with IPv6
Dec 13 01:25:36.900313 kernel: In-situ OAM (IOAM) with IPv6
Dec 13 01:25:36.900320 kernel: NET: Registered PF_PACKET protocol family
Dec 13 01:25:36.900327 kernel: Key type dns_resolver registered
Dec 13 01:25:36.900335 kernel: registered taskstats version 1
Dec 13 01:25:36.900342 kernel: Loading compiled-in X.509 certificates
Dec 13 01:25:36.900350 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.65-flatcar: d83da9ddb9e3c2439731828371f21d0232fd9ffb'
Dec 13 01:25:36.900358 kernel: Key type .fscrypt registered
Dec 13 01:25:36.900373 kernel: Key type fscrypt-provisioning registered
Dec 13 01:25:36.900381 kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 13 01:25:36.900388 kernel: ima: Allocated hash algorithm: sha1
Dec 13 01:25:36.900396 kernel: ima: No architecture policies found
Dec 13 01:25:36.900403 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Dec 13 01:25:36.900410 kernel: clk: Disabling unused clocks
Dec 13 01:25:36.900418 kernel: Freeing unused kernel memory: 39360K
Dec 13 01:25:36.900426 kernel: Run /init as init process
Dec 13 01:25:36.900433 kernel: with arguments:
Dec 13 01:25:36.900447 kernel: /init
Dec 13 01:25:36.900455 kernel: with environment:
Dec 13 01:25:36.900476 kernel: HOME=/
Dec 13 01:25:36.900483 kernel: TERM=linux
Dec 13 01:25:36.900491 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Dec 13 01:25:36.900501 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Dec 13 01:25:36.900510 systemd[1]: Detected virtualization kvm.
Dec 13 01:25:36.900519 systemd[1]: Detected architecture arm64.
Dec 13 01:25:36.900528 systemd[1]: Running in initrd.
Dec 13 01:25:36.900536 systemd[1]: No hostname configured, using default hostname.
Dec 13 01:25:36.900544 systemd[1]: Hostname set to .
Dec 13 01:25:36.900552 systemd[1]: Initializing machine ID from VM UUID.
Dec 13 01:25:36.900560 systemd[1]: Queued start job for default target initrd.target.
Dec 13 01:25:36.900568 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 13 01:25:36.900577 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 13 01:25:36.900585 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Dec 13 01:25:36.900595 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 13 01:25:36.900602 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Dec 13 01:25:36.900636 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Dec 13 01:25:36.900645 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Dec 13 01:25:36.900654 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Dec 13 01:25:36.900662 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 13 01:25:36.900673 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 13 01:25:36.900681 systemd[1]: Reached target paths.target - Path Units.
Dec 13 01:25:36.900689 systemd[1]: Reached target slices.target - Slice Units.
Dec 13 01:25:36.900696 systemd[1]: Reached target swap.target - Swaps.
Dec 13 01:25:36.900704 systemd[1]: Reached target timers.target - Timer Units.
Dec 13 01:25:36.900712 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Dec 13 01:25:36.900720 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 13 01:25:36.900728 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Dec 13 01:25:36.900736 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Dec 13 01:25:36.900745 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 13 01:25:36.900753 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 13 01:25:36.900761 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 13 01:25:36.900769 systemd[1]: Reached target sockets.target - Socket Units.
Dec 13 01:25:36.900777 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Dec 13 01:25:36.900785 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 13 01:25:36.900793 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Dec 13 01:25:36.900801 systemd[1]: Starting systemd-fsck-usr.service...
Dec 13 01:25:36.900808 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 13 01:25:36.900818 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 13 01:25:36.900826 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 01:25:36.900834 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Dec 13 01:25:36.900841 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 13 01:25:36.900849 systemd[1]: Finished systemd-fsck-usr.service.
Dec 13 01:25:36.900859 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Dec 13 01:25:36.900867 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 01:25:36.900875 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 01:25:36.900905 systemd-journald[237]: Collecting audit messages is disabled.
Dec 13 01:25:36.900926 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 13 01:25:36.900935 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 13 01:25:36.900943 systemd-journald[237]: Journal started
Dec 13 01:25:36.900961 systemd-journald[237]: Runtime Journal (/run/log/journal/8ffb6083f85d4df28db5db01f12f9ce3) is 5.9M, max 47.3M, 41.4M free.
Dec 13 01:25:36.887685 systemd-modules-load[238]: Inserted module 'overlay'
Dec 13 01:25:36.905147 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 13 01:25:36.905167 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 13 01:25:36.906663 kernel: Bridge firewalling registered
Dec 13 01:25:36.907052 systemd-modules-load[238]: Inserted module 'br_netfilter'
Dec 13 01:25:36.908500 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 13 01:25:36.909578 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 13 01:25:36.911310 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 13 01:25:36.914082 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 13 01:25:36.920952 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 13 01:25:36.924223 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 01:25:36.925399 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 13 01:25:36.927972 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Dec 13 01:25:36.930036 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 13 01:25:36.941338 dracut-cmdline[274]: dracut-dracut-053
Dec 13 01:25:36.943859 dracut-cmdline[274]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=9494f75a68cfbdce95d0d2f9b58d6d75bc38ee5b4e31dfc2a6da695ffafefba6
Dec 13 01:25:36.967256 systemd-resolved[275]: Positive Trust Anchors:
Dec 13 01:25:36.967275 systemd-resolved[275]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 01:25:36.967308 systemd-resolved[275]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 13 01:25:36.978067 systemd-resolved[275]: Defaulting to hostname 'linux'.
Dec 13 01:25:36.979130 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 13 01:25:36.980040 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 13 01:25:37.019643 kernel: SCSI subsystem initialized
Dec 13 01:25:37.024624 kernel: Loading iSCSI transport class v2.0-870.
Dec 13 01:25:37.031629 kernel: iscsi: registered transport (tcp)
Dec 13 01:25:37.046667 kernel: iscsi: registered transport (qla4xxx)
Dec 13 01:25:37.046712 kernel: QLogic iSCSI HBA Driver
Dec 13 01:25:37.088930 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Dec 13 01:25:37.099746 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Dec 13 01:25:37.115923 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 13 01:25:37.115966 kernel: device-mapper: uevent: version 1.0.3
Dec 13 01:25:37.117630 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Dec 13 01:25:37.161628 kernel: raid6: neonx8 gen() 15739 MB/s
Dec 13 01:25:37.178620 kernel: raid6: neonx4 gen() 15628 MB/s
Dec 13 01:25:37.195619 kernel: raid6: neonx2 gen() 13220 MB/s
Dec 13 01:25:37.212622 kernel: raid6: neonx1 gen() 10467 MB/s
Dec 13 01:25:37.229623 kernel: raid6: int64x8 gen() 6950 MB/s
Dec 13 01:25:37.246621 kernel: raid6: int64x4 gen() 7344 MB/s
Dec 13 01:25:37.263630 kernel: raid6: int64x2 gen() 6128 MB/s
Dec 13 01:25:37.280622 kernel: raid6: int64x1 gen() 5050 MB/s
Dec 13 01:25:37.280636 kernel: raid6: using algorithm neonx8 gen() 15739 MB/s
Dec 13 01:25:37.297626 kernel: raid6: .... xor() 11914 MB/s, rmw enabled
Dec 13 01:25:37.297641 kernel: raid6: using neon recovery algorithm
Dec 13 01:25:37.302624 kernel: xor: measuring software checksum speed
Dec 13 01:25:37.302639 kernel: 8regs : 19821 MB/sec
Dec 13 01:25:37.304016 kernel: 32regs : 18515 MB/sec
Dec 13 01:25:37.304030 kernel: arm64_neon : 27070 MB/sec
Dec 13 01:25:37.304043 kernel: xor: using function: arm64_neon (27070 MB/sec)
Dec 13 01:25:37.352640 kernel: Btrfs loaded, zoned=no, fsverity=no
Dec 13 01:25:37.363687 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Dec 13 01:25:37.380801 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 13 01:25:37.391931 systemd-udevd[458]: Using default interface naming scheme 'v255'.
Dec 13 01:25:37.395092 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 13 01:25:37.404764 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Dec 13 01:25:37.416848 dracut-pre-trigger[466]: rd.md=0: removing MD RAID activation
Dec 13 01:25:37.442639 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 13 01:25:37.456772 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 13 01:25:37.497176 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 13 01:25:37.505782 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Dec 13 01:25:37.516335 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Dec 13 01:25:37.518023 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 13 01:25:37.519345 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 13 01:25:37.521195 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 13 01:25:37.527813 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Dec 13 01:25:37.538704 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Dec 13 01:25:37.550177 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Dec 13 01:25:37.562043 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Dec 13 01:25:37.562146 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Dec 13 01:25:37.562158 kernel: GPT:9289727 != 19775487
Dec 13 01:25:37.562168 kernel: GPT:Alternate GPT header not at the end of the disk.
Dec 13 01:25:37.562178 kernel: GPT:9289727 != 19775487
Dec 13 01:25:37.562194 kernel: GPT: Use GNU Parted to correct GPT errors.
Dec 13 01:25:37.562204 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 13 01:25:37.561222 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 13 01:25:37.561338 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 01:25:37.565712 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 01:25:37.567400 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 01:25:37.567541 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 01:25:37.569178 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 01:25:37.581729 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 01:25:37.584657 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (508)
Dec 13 01:25:37.584708 kernel: BTRFS: device fsid 2893cd1e-612b-4262-912c-10787dc9c881 devid 1 transid 46 /dev/vda3 scanned by (udev-worker) (509)
Dec 13 01:25:37.592978 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Dec 13 01:25:37.597110 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 01:25:37.601709 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Dec 13 01:25:37.608406 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Dec 13 01:25:37.609691 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Dec 13 01:25:37.615094 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Dec 13 01:25:37.628768 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Dec 13 01:25:37.630638 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 01:25:37.636987 disk-uuid[548]: Primary Header is updated.
Dec 13 01:25:37.636987 disk-uuid[548]: Secondary Entries is updated.
Dec 13 01:25:37.636987 disk-uuid[548]: Secondary Header is updated.
Dec 13 01:25:37.640641 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 13 01:25:37.656743 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 01:25:38.659637 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 13 01:25:38.660492 disk-uuid[551]: The operation has completed successfully.
Dec 13 01:25:38.687345 systemd[1]: disk-uuid.service: Deactivated successfully.
Dec 13 01:25:38.687454 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Dec 13 01:25:38.710773 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Dec 13 01:25:38.713757 sh[569]: Success
Dec 13 01:25:38.728691 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Dec 13 01:25:38.767032 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Dec 13 01:25:38.768479 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Dec 13 01:25:38.769216 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Dec 13 01:25:38.780375 kernel: BTRFS info (device dm-0): first mount of filesystem 2893cd1e-612b-4262-912c-10787dc9c881
Dec 13 01:25:38.780424 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Dec 13 01:25:38.780444 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Dec 13 01:25:38.780458 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Dec 13 01:25:38.780970 kernel: BTRFS info (device dm-0): using free space tree
Dec 13 01:25:38.784695 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Dec 13 01:25:38.785780 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Dec 13 01:25:38.786505 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Dec 13 01:25:38.788440 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Dec 13 01:25:38.803917 kernel: BTRFS info (device vda6): first mount of filesystem dbef6a22-a801-4c1e-a0cd-3fc525f899dd
Dec 13 01:25:38.803970 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Dec 13 01:25:38.803983 kernel: BTRFS info (device vda6): using free space tree
Dec 13 01:25:38.808639 kernel: BTRFS info (device vda6): auto enabling async discard
Dec 13 01:25:38.815062 systemd[1]: mnt-oem.mount: Deactivated successfully.
Dec 13 01:25:38.816654 kernel: BTRFS info (device vda6): last unmount of filesystem dbef6a22-a801-4c1e-a0cd-3fc525f899dd
Dec 13 01:25:38.825131 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Dec 13 01:25:38.830805 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Dec 13 01:25:38.889999 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 13 01:25:38.903846 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 13 01:25:38.936380 systemd-networkd[759]: lo: Link UP
Dec 13 01:25:38.936393 systemd-networkd[759]: lo: Gained carrier
Dec 13 01:25:38.937106 systemd-networkd[759]: Enumeration completed
Dec 13 01:25:38.938425 systemd-networkd[759]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 01:25:38.938441 systemd-networkd[759]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 01:25:38.938638 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 13 01:25:38.939312 systemd-networkd[759]: eth0: Link UP
Dec 13 01:25:38.939315 systemd-networkd[759]: eth0: Gained carrier
Dec 13 01:25:38.939322 systemd-networkd[759]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 01:25:38.941530 systemd[1]: Reached target network.target - Network.
Dec 13 01:25:38.949825 ignition[672]: Ignition 2.19.0
Dec 13 01:25:38.949832 ignition[672]: Stage: fetch-offline
Dec 13 01:25:38.949866 ignition[672]: no configs at "/usr/lib/ignition/base.d"
Dec 13 01:25:38.949874 ignition[672]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 13 01:25:38.950031 ignition[672]: parsed url from cmdline: ""
Dec 13 01:25:38.950034 ignition[672]: no config URL provided
Dec 13 01:25:38.950038 ignition[672]: reading system config file "/usr/lib/ignition/user.ign"
Dec 13 01:25:38.950046 ignition[672]: no config at "/usr/lib/ignition/user.ign"
Dec 13 01:25:38.950073 ignition[672]: op(1): [started] loading QEMU firmware config module
Dec 13 01:25:38.950078 ignition[672]: op(1): executing: "modprobe" "qemu_fw_cfg"
Dec 13 01:25:38.959683 systemd-networkd[759]: eth0: DHCPv4 address 10.0.0.44/16, gateway 10.0.0.1 acquired from 10.0.0.1
Dec 13 01:25:38.964613 ignition[672]: op(1): [finished] loading QEMU firmware config module
Dec 13 01:25:39.003219 ignition[672]: parsing config with SHA512: a97eb342e1b68503b6014c814648b608a9f5783517b62bc27592426ae9596d32ab3ae99bd4b07a2d9470bf8ca22510a9e5953760aed3a0c75728b4d2f9a9318b
Dec 13 01:25:39.007867 unknown[672]: fetched base config from "system"
Dec 13 01:25:39.007877 unknown[672]: fetched user config from "qemu"
Dec 13 01:25:39.008443 ignition[672]: fetch-offline: fetch-offline passed
Dec 13 01:25:39.008510 ignition[672]: Ignition finished successfully
Dec 13 01:25:39.010138 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 13 01:25:39.011637 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Dec 13 01:25:39.026779 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Dec 13 01:25:39.037067 ignition[766]: Ignition 2.19.0
Dec 13 01:25:39.037077 ignition[766]: Stage: kargs
Dec 13 01:25:39.037229 ignition[766]: no configs at "/usr/lib/ignition/base.d"
Dec 13 01:25:39.037238 ignition[766]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 13 01:25:39.038157 ignition[766]: kargs: kargs passed
Dec 13 01:25:39.038200 ignition[766]: Ignition finished successfully
Dec 13 01:25:39.040216 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Dec 13 01:25:39.048818 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Dec 13 01:25:39.060196 ignition[775]: Ignition 2.19.0
Dec 13 01:25:39.060209 ignition[775]: Stage: disks
Dec 13 01:25:39.060412 ignition[775]: no configs at "/usr/lib/ignition/base.d"
Dec 13 01:25:39.060422 ignition[775]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 13 01:25:39.063358 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Dec 13 01:25:39.061666 ignition[775]: disks: disks passed
Dec 13 01:25:39.064954 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Dec 13 01:25:39.061719 ignition[775]: Ignition finished successfully
Dec 13 01:25:39.066140 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Dec 13 01:25:39.067020 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 13 01:25:39.068108 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 13 01:25:39.069483 systemd[1]: Reached target basic.target - Basic System.
Dec 13 01:25:39.080734 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Dec 13 01:25:39.094323 systemd-fsck[784]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Dec 13 01:25:39.102782 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Dec 13 01:25:39.109718 systemd[1]: Mounting sysroot.mount - /sysroot...
Dec 13 01:25:39.153638 kernel: EXT4-fs (vda9): mounted filesystem 32632247-db8d-4541-89c0-6f68c7fa7ee3 r/w with ordered data mode. Quota mode: none.
Dec 13 01:25:39.154221 systemd[1]: Mounted sysroot.mount - /sysroot.
Dec 13 01:25:39.155242 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Dec 13 01:25:39.173696 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 13 01:25:39.175255 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Dec 13 01:25:39.176667 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Dec 13 01:25:39.176705 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Dec 13 01:25:39.176726 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 13 01:25:39.185040 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (792)
Dec 13 01:25:39.184198 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Dec 13 01:25:39.186566 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Dec 13 01:25:39.192259 kernel: BTRFS info (device vda6): first mount of filesystem dbef6a22-a801-4c1e-a0cd-3fc525f899dd
Dec 13 01:25:39.192278 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Dec 13 01:25:39.192289 kernel: BTRFS info (device vda6): using free space tree
Dec 13 01:25:39.197652 kernel: BTRFS info (device vda6): auto enabling async discard
Dec 13 01:25:39.197349 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 13 01:25:39.243266 initrd-setup-root[816]: cut: /sysroot/etc/passwd: No such file or directory
Dec 13 01:25:39.246939 initrd-setup-root[823]: cut: /sysroot/etc/group: No such file or directory
Dec 13 01:25:39.251400 initrd-setup-root[830]: cut: /sysroot/etc/shadow: No such file or directory
Dec 13 01:25:39.255355 initrd-setup-root[837]: cut: /sysroot/etc/gshadow: No such file or directory
Dec 13 01:25:39.334498 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Dec 13 01:25:39.348718 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Dec 13 01:25:39.351178 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Dec 13 01:25:39.356620 kernel: BTRFS info (device vda6): last unmount of filesystem dbef6a22-a801-4c1e-a0cd-3fc525f899dd
Dec 13 01:25:39.373192 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Dec 13 01:25:39.375654 ignition[905]: INFO : Ignition 2.19.0
Dec 13 01:25:39.375654 ignition[905]: INFO : Stage: mount
Dec 13 01:25:39.376964 ignition[905]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 01:25:39.376964 ignition[905]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 13 01:25:39.376964 ignition[905]: INFO : mount: mount passed
Dec 13 01:25:39.376964 ignition[905]: INFO : Ignition finished successfully
Dec 13 01:25:39.378526 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Dec 13 01:25:39.392726 systemd[1]: Starting ignition-files.service - Ignition (files)...
Dec 13 01:25:39.778899 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Dec 13 01:25:39.792454 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 13 01:25:39.799356 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (918)
Dec 13 01:25:39.799401 kernel: BTRFS info (device vda6): first mount of filesystem dbef6a22-a801-4c1e-a0cd-3fc525f899dd
Dec 13 01:25:39.799413 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Dec 13 01:25:39.799626 kernel: BTRFS info (device vda6): using free space tree
Dec 13 01:25:39.804984 kernel: BTRFS info (device vda6): auto enabling async discard
Dec 13 01:25:39.806175 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 13 01:25:39.825187 ignition[935]: INFO : Ignition 2.19.0
Dec 13 01:25:39.825187 ignition[935]: INFO : Stage: files
Dec 13 01:25:39.825187 ignition[935]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 01:25:39.825187 ignition[935]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 13 01:25:39.825187 ignition[935]: DEBUG : files: compiled without relabeling support, skipping
Dec 13 01:25:39.828962 ignition[935]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Dec 13 01:25:39.828962 ignition[935]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Dec 13 01:25:39.832496 ignition[935]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Dec 13 01:25:39.834441 ignition[935]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Dec 13 01:25:39.836603 unknown[935]: wrote ssh authorized keys file for user: core
Dec 13 01:25:39.837545 ignition[935]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Dec 13 01:25:39.840948 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Dec 13 01:25:39.840948 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Dec 13 01:25:39.840948 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Dec 13 01:25:39.840948 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Dec 13 01:25:39.907630 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Dec 13 01:25:40.002863 systemd-networkd[759]: eth0: Gained IPv6LL
Dec 13 01:25:40.025741 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Dec 13 01:25:40.027760 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Dec 13 01:25:40.027760 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Dec 13 01:25:40.027760 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Dec 13 01:25:40.027760 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Dec 13 01:25:40.027760 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 13 01:25:40.027760 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 13 01:25:40.027760 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 13 01:25:40.027760 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 13 01:25:40.027760 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 01:25:40.027760 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 01:25:40.027760 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Dec 13 01:25:40.027760 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Dec 13 01:25:40.027760 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Dec 13 01:25:40.027760 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-arm64.raw: attempt #1
Dec 13 01:25:40.342275 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Dec 13 01:25:40.836653 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Dec 13 01:25:40.836653 ignition[935]: INFO : files: op(c): [started] processing unit "containerd.service"
Dec 13 01:25:40.840226 ignition[935]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Dec 13 01:25:40.840226 ignition[935]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Dec 13 01:25:40.840226 ignition[935]: INFO : files: op(c): [finished] processing unit "containerd.service"
Dec 13 01:25:40.840226 ignition[935]: INFO : files: op(e): [started] processing unit "prepare-helm.service"
Dec 13 01:25:40.840226 ignition[935]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 13 01:25:40.840226 ignition[935]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 13 01:25:40.840226 ignition[935]: INFO : files: op(e): [finished] processing unit "prepare-helm.service"
Dec 13 01:25:40.840226 ignition[935]: INFO : files: op(10): [started] processing unit "coreos-metadata.service"
Dec 13 01:25:40.840226 ignition[935]: INFO : files: op(10): op(11): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Dec 13 01:25:40.840226 ignition[935]: INFO : files: op(10): op(11): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Dec 13 01:25:40.840226 ignition[935]: INFO : files: op(10): [finished] processing unit "coreos-metadata.service"
Dec 13 01:25:40.840226 ignition[935]: INFO : files: op(12): [started] setting preset to disabled for "coreos-metadata.service"
Dec 13 01:25:40.874563 ignition[935]: INFO : files: op(12): op(13): [started] removing enablement symlink(s) for "coreos-metadata.service"
Dec 13 01:25:40.878114 ignition[935]: INFO : files: op(12): op(13): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Dec 13 01:25:40.880367 ignition[935]: INFO : files: op(12): [finished] setting preset to disabled for "coreos-metadata.service"
Dec 13 01:25:40.880367 ignition[935]: INFO : files: op(14): [started] setting preset to enabled for "prepare-helm.service"
Dec 13 01:25:40.880367 ignition[935]: INFO : files: op(14): [finished] setting preset to enabled for "prepare-helm.service"
Dec 13 01:25:40.880367 ignition[935]: INFO : files: createResultFile: createFiles: op(15): [started] writing file "/sysroot/etc/.ignition-result.json"
Dec 13 01:25:40.880367 ignition[935]: INFO : files: createResultFile: createFiles: op(15): [finished] writing file "/sysroot/etc/.ignition-result.json"
Dec 13 01:25:40.880367 ignition[935]: INFO : files: files passed
Dec 13 01:25:40.880367 ignition[935]: INFO : Ignition finished successfully
Dec 13 01:25:40.880932 systemd[1]: Finished ignition-files.service - Ignition (files).
Dec 13 01:25:40.889755 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Dec 13 01:25:40.892570 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Dec 13 01:25:40.894762 systemd[1]: ignition-quench.service: Deactivated successfully.
Dec 13 01:25:40.894846 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Dec 13 01:25:40.900544 initrd-setup-root-after-ignition[963]: grep: /sysroot/oem/oem-release: No such file or directory
Dec 13 01:25:40.904204 initrd-setup-root-after-ignition[965]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 01:25:40.904204 initrd-setup-root-after-ignition[965]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 01:25:40.906497 initrd-setup-root-after-ignition[969]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 01:25:40.908672 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 13 01:25:40.909707 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Dec 13 01:25:40.918755 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Dec 13 01:25:40.938868 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Dec 13 01:25:40.938979 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Dec 13 01:25:40.940599 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Dec 13 01:25:40.941909 systemd[1]: Reached target initrd.target - Initrd Default Target.
Dec 13 01:25:40.943213 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Dec 13 01:25:40.944076 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Dec 13 01:25:40.959462 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 13 01:25:40.961737 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Dec 13 01:25:40.972049 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Dec 13 01:25:40.972966 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 13 01:25:40.974458 systemd[1]: Stopped target timers.target - Timer Units.
Dec 13 01:25:40.975738 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Dec 13 01:25:40.975847 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 13 01:25:40.977693 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Dec 13 01:25:40.979123 systemd[1]: Stopped target basic.target - Basic System.
Dec 13 01:25:40.980396 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Dec 13 01:25:40.981661 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Dec 13 01:25:40.983166 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Dec 13 01:25:40.984564 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Dec 13 01:25:40.985906 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Dec 13 01:25:40.987331 systemd[1]: Stopped target sysinit.target - System Initialization. Dec 13 01:25:40.988764 systemd[1]: Stopped target local-fs.target - Local File Systems. Dec 13 01:25:40.990026 systemd[1]: Stopped target swap.target - Swaps. Dec 13 01:25:40.991130 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 13 01:25:40.991233 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Dec 13 01:25:40.992982 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Dec 13 01:25:40.994443 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 01:25:40.995827 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Dec 13 01:25:40.996700 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 01:25:41.001046 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 13 01:25:41.001710 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Dec 13 01:25:41.004784 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 13 01:25:41.005012 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Dec 13 01:25:41.008420 systemd[1]: Stopped target paths.target - Path Units. Dec 13 01:25:41.009626 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 13 01:25:41.014696 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 01:25:41.015669 systemd[1]: Stopped target slices.target - Slice Units. Dec 13 01:25:41.017209 systemd[1]: Stopped target sockets.target - Socket Units. Dec 13 01:25:41.018315 systemd[1]: iscsid.socket: Deactivated successfully. Dec 13 01:25:41.018399 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Dec 13 01:25:41.019500 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 13 01:25:41.019577 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 13 01:25:41.020881 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 13 01:25:41.020981 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 13 01:25:41.022253 systemd[1]: ignition-files.service: Deactivated successfully. Dec 13 01:25:41.022340 systemd[1]: Stopped ignition-files.service - Ignition (files). Dec 13 01:25:41.037017 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Dec 13 01:25:41.037689 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 13 01:25:41.037802 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 01:25:41.040597 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Dec 13 01:25:41.041451 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 13 01:25:41.041577 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 01:25:41.043266 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 13 01:25:41.043397 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. 
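The files stage just above ends by writing /sysroot/etc/.ignition-result.json before ignition-files.service is recorded as finished. As an illustration only (not part of the log), a minimal Python sketch can read that result file once the real root is mounted, where the same path appears as /etc/.ignition-result.json; the log does not show the file's schema, so the sketch simply pretty-prints whatever it contains.

    import json
    from pathlib import Path

    # Path written during the Ignition files stage (as /sysroot/etc/.ignition-result.json
    # in the initramfs; visible at this path once the system has switched root).
    result_path = Path("/etc/.ignition-result.json")

    if result_path.exists():
        # The schema is not shown in the log, so just dump whatever is recorded.
        data = json.loads(result_path.read_text())
        print(json.dumps(data, indent=2, sort_keys=True))
    else:
        print(f"{result_path} not found; Ignition may not have run on this boot")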
Dec 13 01:25:41.049544 ignition[989]: INFO : Ignition 2.19.0 Dec 13 01:25:41.049544 ignition[989]: INFO : Stage: umount Dec 13 01:25:41.049544 ignition[989]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 01:25:41.049544 ignition[989]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 01:25:41.055551 ignition[989]: INFO : umount: umount passed Dec 13 01:25:41.055551 ignition[989]: INFO : Ignition finished successfully Dec 13 01:25:41.050453 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 13 01:25:41.051373 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Dec 13 01:25:41.052644 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 13 01:25:41.052728 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Dec 13 01:25:41.055984 systemd[1]: Stopped target network.target - Network. Dec 13 01:25:41.056896 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 13 01:25:41.056962 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Dec 13 01:25:41.058272 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 13 01:25:41.058316 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Dec 13 01:25:41.059526 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 13 01:25:41.059569 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Dec 13 01:25:41.061350 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Dec 13 01:25:41.061395 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Dec 13 01:25:41.062825 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Dec 13 01:25:41.064170 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Dec 13 01:25:41.066149 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 13 01:25:41.067661 systemd-networkd[759]: eth0: DHCPv6 lease lost Dec 13 01:25:41.070049 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 13 01:25:41.070159 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Dec 13 01:25:41.071469 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 13 01:25:41.071502 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Dec 13 01:25:41.079735 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Dec 13 01:25:41.080405 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 13 01:25:41.080476 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 13 01:25:41.081546 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 01:25:41.083004 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 13 01:25:41.087530 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Dec 13 01:25:41.091040 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 01:25:41.091100 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 13 01:25:41.092016 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 13 01:25:41.092064 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Dec 13 01:25:41.093418 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Dec 13 01:25:41.093469 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 01:25:41.095972 systemd[1]: network-cleanup.service: Deactivated successfully. 
Dec 13 01:25:41.097641 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Dec 13 01:25:41.101217 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 13 01:25:41.101361 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 01:25:41.102971 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 13 01:25:41.103007 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Dec 13 01:25:41.104720 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 13 01:25:41.104753 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 01:25:41.106061 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 13 01:25:41.106106 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Dec 13 01:25:41.108088 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 13 01:25:41.108132 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Dec 13 01:25:41.110122 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 01:25:41.110168 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 01:25:41.123859 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Dec 13 01:25:41.124936 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Dec 13 01:25:41.125001 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 01:25:41.126494 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 01:25:41.126536 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:25:41.128534 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 13 01:25:41.128650 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Dec 13 01:25:41.129788 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 13 01:25:41.129876 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Dec 13 01:25:41.133035 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Dec 13 01:25:41.134668 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 13 01:25:41.134736 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Dec 13 01:25:41.136891 systemd[1]: Starting initrd-switch-root.service - Switch Root... Dec 13 01:25:41.148846 systemd[1]: Switching root. Dec 13 01:25:41.171536 systemd-journald[237]: Journal stopped Dec 13 01:25:41.883962 systemd-journald[237]: Received SIGTERM from PID 1 (systemd). Dec 13 01:25:41.884015 kernel: SELinux: policy capability network_peer_controls=1 Dec 13 01:25:41.884028 kernel: SELinux: policy capability open_perms=1 Dec 13 01:25:41.884037 kernel: SELinux: policy capability extended_socket_class=1 Dec 13 01:25:41.884047 kernel: SELinux: policy capability always_check_network=0 Dec 13 01:25:41.884060 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 13 01:25:41.884069 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 13 01:25:41.884078 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 13 01:25:41.884091 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 13 01:25:41.884101 kernel: audit: type=1403 audit(1734053141.357:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Dec 13 01:25:41.884111 systemd[1]: Successfully loaded SELinux policy in 30.959ms. 
Dec 13 01:25:41.884127 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.222ms. Dec 13 01:25:41.884141 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Dec 13 01:25:41.884151 systemd[1]: Detected virtualization kvm. Dec 13 01:25:41.884163 systemd[1]: Detected architecture arm64. Dec 13 01:25:41.884177 systemd[1]: Detected first boot. Dec 13 01:25:41.884188 systemd[1]: Initializing machine ID from VM UUID. Dec 13 01:25:41.884198 zram_generator::config[1051]: No configuration found. Dec 13 01:25:41.884209 systemd[1]: Populated /etc with preset unit settings. Dec 13 01:25:41.884219 systemd[1]: Queued start job for default target multi-user.target. Dec 13 01:25:41.884230 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Dec 13 01:25:41.884241 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Dec 13 01:25:41.884253 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Dec 13 01:25:41.884263 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Dec 13 01:25:41.884274 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Dec 13 01:25:41.884285 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Dec 13 01:25:41.884295 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Dec 13 01:25:41.884306 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Dec 13 01:25:41.884317 systemd[1]: Created slice user.slice - User and Session Slice. Dec 13 01:25:41.884327 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 01:25:41.884337 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 01:25:41.884349 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Dec 13 01:25:41.884359 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Dec 13 01:25:41.884370 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Dec 13 01:25:41.884381 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 13 01:25:41.884392 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Dec 13 01:25:41.884402 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 01:25:41.884412 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Dec 13 01:25:41.884429 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 01:25:41.884442 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 13 01:25:41.884454 systemd[1]: Reached target slices.target - Slice Units. Dec 13 01:25:41.884466 systemd[1]: Reached target swap.target - Swaps. Dec 13 01:25:41.884477 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Dec 13 01:25:41.884487 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. 
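The systemd 255 banner above lists its compile-time options as a string of +FEATURE / -FEATURE tokens. Purely as a reading aid (the string below is copied from that log line, nothing is queried from a running system), a small Python sketch splits it into enabled and disabled sets:

    # Feature tokens copied verbatim from the systemd 255 banner in the log above
    # (the trailing "default-hierarchy=unified" token is omitted here).
    features = (
        "+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS "
        "+OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD "
        "+LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 "
        "+BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT"
    )

    enabled = sorted(tok[1:] for tok in features.split() if tok.startswith("+"))
    disabled = sorted(tok[1:] for tok in features.split() if tok.startswith("-"))

    print("enabled :", ", ".join(enabled))
    print("disabled:", ", ".join(disabled))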
Dec 13 01:25:41.884497 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Dec 13 01:25:41.884508 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Dec 13 01:25:41.884518 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 13 01:25:41.884529 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 13 01:25:41.884540 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 01:25:41.884551 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Dec 13 01:25:41.884561 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Dec 13 01:25:41.884572 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Dec 13 01:25:41.884582 systemd[1]: Mounting media.mount - External Media Directory... Dec 13 01:25:41.884592 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Dec 13 01:25:41.884603 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Dec 13 01:25:41.887232 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Dec 13 01:25:41.887246 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Dec 13 01:25:41.887264 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 01:25:41.887281 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 13 01:25:41.887292 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Dec 13 01:25:41.887303 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 01:25:41.887313 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 13 01:25:41.887324 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 01:25:41.887334 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Dec 13 01:25:41.887346 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 01:25:41.887358 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 13 01:25:41.887369 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Dec 13 01:25:41.887381 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) Dec 13 01:25:41.887391 kernel: fuse: init (API version 7.39) Dec 13 01:25:41.887402 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 13 01:25:41.887413 kernel: loop: module loaded Dec 13 01:25:41.887433 kernel: ACPI: bus type drm_connector registered Dec 13 01:25:41.887445 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 13 01:25:41.887456 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 13 01:25:41.887469 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Dec 13 01:25:41.887481 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 13 01:25:41.887516 systemd-journald[1137]: Collecting audit messages is disabled. 
Dec 13 01:25:41.887544 systemd-journald[1137]: Journal started Dec 13 01:25:41.887565 systemd-journald[1137]: Runtime Journal (/run/log/journal/8ffb6083f85d4df28db5db01f12f9ce3) is 5.9M, max 47.3M, 41.4M free. Dec 13 01:25:41.889149 systemd[1]: Started systemd-journald.service - Journal Service. Dec 13 01:25:41.890072 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Dec 13 01:25:41.890962 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Dec 13 01:25:41.891847 systemd[1]: Mounted media.mount - External Media Directory. Dec 13 01:25:41.892705 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Dec 13 01:25:41.893677 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Dec 13 01:25:41.894548 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Dec 13 01:25:41.895628 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Dec 13 01:25:41.896784 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 01:25:41.897883 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 13 01:25:41.898043 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Dec 13 01:25:41.899161 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 01:25:41.899316 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 01:25:41.900381 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 01:25:41.900546 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 13 01:25:41.901584 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 01:25:41.901740 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 01:25:41.902820 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 13 01:25:41.902966 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Dec 13 01:25:41.904038 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 01:25:41.904250 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 01:25:41.905587 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 13 01:25:41.906718 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 13 01:25:41.908076 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Dec 13 01:25:41.918334 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 13 01:25:41.928740 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Dec 13 01:25:41.930468 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Dec 13 01:25:41.931373 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 13 01:25:41.934747 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Dec 13 01:25:41.938405 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Dec 13 01:25:41.941320 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 01:25:41.942340 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... 
Dec 13 01:25:41.943254 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 13 01:25:41.944742 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 13 01:25:41.948572 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 13 01:25:41.949004 systemd-journald[1137]: Time spent on flushing to /var/log/journal/8ffb6083f85d4df28db5db01f12f9ce3 is 13.602ms for 844 entries. Dec 13 01:25:41.949004 systemd-journald[1137]: System Journal (/var/log/journal/8ffb6083f85d4df28db5db01f12f9ce3) is 8.0M, max 195.6M, 187.6M free. Dec 13 01:25:41.976837 systemd-journald[1137]: Received client request to flush runtime journal. Dec 13 01:25:41.952485 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 01:25:41.957027 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Dec 13 01:25:41.958101 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Dec 13 01:25:41.959323 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Dec 13 01:25:41.962387 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Dec 13 01:25:41.964829 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Dec 13 01:25:41.976864 udevadm[1193]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Dec 13 01:25:41.978322 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Dec 13 01:25:41.985790 systemd-tmpfiles[1186]: ACLs are not supported, ignoring. Dec 13 01:25:41.985802 systemd-tmpfiles[1186]: ACLs are not supported, ignoring. Dec 13 01:25:41.987913 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 13 01:25:41.989652 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 13 01:25:41.997759 systemd[1]: Starting systemd-sysusers.service - Create System Users... Dec 13 01:25:42.015680 systemd[1]: Finished systemd-sysusers.service - Create System Users. Dec 13 01:25:42.025740 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 13 01:25:42.036785 systemd-tmpfiles[1207]: ACLs are not supported, ignoring. Dec 13 01:25:42.036804 systemd-tmpfiles[1207]: ACLs are not supported, ignoring. Dec 13 01:25:42.040364 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 01:25:42.368574 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Dec 13 01:25:42.384829 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 01:25:42.404409 systemd-udevd[1213]: Using default interface naming scheme 'v255'. Dec 13 01:25:42.420201 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 01:25:42.427750 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 13 01:25:42.451934 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 46 scanned by (udev-worker) (1217) Dec 13 01:25:42.451797 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Dec 13 01:25:42.456705 systemd[1]: Found device dev-ttyAMA0.device - /dev/ttyAMA0. 
Dec 13 01:25:42.473639 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1214) Dec 13 01:25:42.488627 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1214) Dec 13 01:25:42.504790 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Dec 13 01:25:42.506582 systemd[1]: Started systemd-userdbd.service - User Database Manager. Dec 13 01:25:42.549815 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 01:25:42.556856 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Dec 13 01:25:42.559603 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Dec 13 01:25:42.567933 systemd-networkd[1220]: lo: Link UP Dec 13 01:25:42.568200 systemd-networkd[1220]: lo: Gained carrier Dec 13 01:25:42.568989 systemd-networkd[1220]: Enumeration completed Dec 13 01:25:42.569187 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 13 01:25:42.569734 systemd-networkd[1220]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 01:25:42.569815 systemd-networkd[1220]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 01:25:42.570500 systemd-networkd[1220]: eth0: Link UP Dec 13 01:25:42.570575 systemd-networkd[1220]: eth0: Gained carrier Dec 13 01:25:42.570642 systemd-networkd[1220]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 01:25:42.571183 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Dec 13 01:25:42.576874 lvm[1250]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 01:25:42.589680 systemd-networkd[1220]: eth0: DHCPv4 address 10.0.0.44/16, gateway 10.0.0.1 acquired from 10.0.0.1 Dec 13 01:25:42.603902 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:25:42.614963 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Dec 13 01:25:42.616041 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 13 01:25:42.629796 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Dec 13 01:25:42.632919 lvm[1259]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 01:25:42.659872 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Dec 13 01:25:42.660943 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Dec 13 01:25:42.661873 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 13 01:25:42.661903 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 13 01:25:42.662635 systemd[1]: Reached target machines.target - Containers. Dec 13 01:25:42.664246 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Dec 13 01:25:42.676729 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Dec 13 01:25:42.678583 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... 
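systemd-networkd reports a DHCPv4 lease of 10.0.0.44/16 with gateway 10.0.0.1 on eth0. For context, a short Python sketch (values copied from the log line above, nothing queried live) derives the network, netmask, and broadcast address implied by that lease using the standard library:

    import ipaddress

    # Lease exactly as reported by systemd-networkd for eth0 above.
    iface = ipaddress.ip_interface("10.0.0.44/16")
    gateway = ipaddress.ip_address("10.0.0.1")

    print("address  :", iface.ip)                          # 10.0.0.44
    print("network  :", iface.network)                     # 10.0.0.0/16
    print("netmask  :", iface.network.netmask)             # 255.255.0.0
    print("broadcast:", iface.network.broadcast_address)   # 10.0.255.255
    print("gateway in network:", gateway in iface.network) # True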
Dec 13 01:25:42.679395 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 01:25:42.680770 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Dec 13 01:25:42.682662 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Dec 13 01:25:42.686498 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Dec 13 01:25:42.694660 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Dec 13 01:25:42.696644 kernel: loop0: detected capacity change from 0 to 194512 Dec 13 01:25:42.697711 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Dec 13 01:25:42.705860 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 13 01:25:42.706514 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Dec 13 01:25:42.714709 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 13 01:25:42.751887 kernel: loop1: detected capacity change from 0 to 114328 Dec 13 01:25:42.809759 kernel: loop2: detected capacity change from 0 to 114432 Dec 13 01:25:42.864641 kernel: loop3: detected capacity change from 0 to 194512 Dec 13 01:25:42.892648 kernel: loop4: detected capacity change from 0 to 114328 Dec 13 01:25:42.896641 kernel: loop5: detected capacity change from 0 to 114432 Dec 13 01:25:42.906143 (sd-merge)[1279]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Dec 13 01:25:42.906526 (sd-merge)[1279]: Merged extensions into '/usr'. Dec 13 01:25:42.910639 systemd[1]: Reloading requested from client PID 1267 ('systemd-sysext') (unit systemd-sysext.service)... Dec 13 01:25:42.910654 systemd[1]: Reloading... Dec 13 01:25:42.949687 zram_generator::config[1307]: No configuration found. Dec 13 01:25:43.014885 ldconfig[1263]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 13 01:25:43.042917 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:25:43.085292 systemd[1]: Reloading finished in 174 ms. Dec 13 01:25:43.100297 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Dec 13 01:25:43.101480 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Dec 13 01:25:43.118790 systemd[1]: Starting ensure-sysext.service... Dec 13 01:25:43.123596 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 13 01:25:43.126645 systemd[1]: Reloading requested from client PID 1348 ('systemctl') (unit ensure-sysext.service)... Dec 13 01:25:43.126660 systemd[1]: Reloading... Dec 13 01:25:43.140196 systemd-tmpfiles[1349]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 13 01:25:43.140461 systemd-tmpfiles[1349]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Dec 13 01:25:43.141260 systemd-tmpfiles[1349]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Dec 13 01:25:43.141488 systemd-tmpfiles[1349]: ACLs are not supported, ignoring. 
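The (sd-merge) lines above show systemd-sysext picking up the containerd-flatcar, docker-flatcar and kubernetes extension images and merging them into /usr. As a sketch only, the Python snippet below lists candidate .raw images in the extension directories; /etc/extensions is where the Ignition stage earlier linked kubernetes.raw, while /var/lib/extensions is another location systemd-sysext consults and is included here as an assumption, not something shown in the log.

    from pathlib import Path

    # /etc/extensions held the kubernetes.raw symlink written during Ignition;
    # /var/lib/extensions is listed as an additional, assumed search location.
    search_dirs = [Path("/etc/extensions"), Path("/var/lib/extensions")]

    for directory in search_dirs:
        if not directory.is_dir():
            continue
        for image in sorted(directory.glob("*.raw")):
            # Resolve symlinks such as kubernetes.raw -> /opt/extensions/kubernetes/...
            print(f"{image.name} -> {image.resolve()}")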
Dec 13 01:25:43.141539 systemd-tmpfiles[1349]: ACLs are not supported, ignoring. Dec 13 01:25:43.143803 systemd-tmpfiles[1349]: Detected autofs mount point /boot during canonicalization of boot. Dec 13 01:25:43.143817 systemd-tmpfiles[1349]: Skipping /boot Dec 13 01:25:43.150491 systemd-tmpfiles[1349]: Detected autofs mount point /boot during canonicalization of boot. Dec 13 01:25:43.150506 systemd-tmpfiles[1349]: Skipping /boot Dec 13 01:25:43.167964 zram_generator::config[1375]: No configuration found. Dec 13 01:25:43.262055 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:25:43.304675 systemd[1]: Reloading finished in 177 ms. Dec 13 01:25:43.320362 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 01:25:43.339388 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Dec 13 01:25:43.341695 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Dec 13 01:25:43.343591 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Dec 13 01:25:43.348890 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 13 01:25:43.350628 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Dec 13 01:25:43.358138 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 01:25:43.360764 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 01:25:43.366939 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 01:25:43.371766 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 01:25:43.374378 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 01:25:43.375431 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 01:25:43.375600 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 01:25:43.380078 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Dec 13 01:25:43.389868 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 01:25:43.390005 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 01:25:43.394468 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 01:25:43.394705 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 01:25:43.399017 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 01:25:43.400382 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 01:25:43.402843 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 01:25:43.408879 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 01:25:43.410786 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 01:25:43.412855 systemd[1]: Starting systemd-update-done.service - Update is Completed... Dec 13 01:25:43.415453 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. 
Dec 13 01:25:43.416897 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 01:25:43.417046 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 01:25:43.418397 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 01:25:43.418562 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 01:25:43.419974 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 01:25:43.420553 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 01:25:43.427087 augenrules[1461]: No rules Dec 13 01:25:43.431161 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Dec 13 01:25:43.432560 systemd[1]: Finished systemd-update-done.service - Update is Completed. Dec 13 01:25:43.434777 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Dec 13 01:25:43.438026 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 01:25:43.438589 systemd-resolved[1424]: Positive Trust Anchors: Dec 13 01:25:43.440365 systemd-resolved[1424]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 01:25:43.440399 systemd-resolved[1424]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 13 01:25:43.442752 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 01:25:43.444409 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 13 01:25:43.446250 systemd-resolved[1424]: Defaulting to hostname 'linux'. Dec 13 01:25:43.447778 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 01:25:43.449772 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 01:25:43.450548 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 01:25:43.450727 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 01:25:43.451040 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 13 01:25:43.452220 systemd[1]: Finished ensure-sysext.service. Dec 13 01:25:43.453181 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 01:25:43.453315 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 01:25:43.454409 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 01:25:43.454581 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 13 01:25:43.455770 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 01:25:43.455906 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 01:25:43.457029 systemd[1]: modprobe@loop.service: Deactivated successfully. 
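systemd-resolved loads the root DNSSEC trust anchor shown above as a DS record. Purely as a reading aid (the record is copied from the log line), a Python sketch that splits it into its named fields:

    # Root trust anchor exactly as logged by systemd-resolved above.
    ds_record = (". IN DS 20326 8 2 "
                 "e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d")

    owner, klass, rtype, key_tag, algorithm, digest_type, digest = ds_record.split()

    print("owner      :", owner)        # "." is the DNS root zone
    print("key tag    :", key_tag)      # 20326, the root KSK commonly known as KSK-2017
    print("algorithm  :", algorithm)    # 8 = RSA/SHA-256
    print("digest type:", digest_type)  # 2 = SHA-256
    print("digest     :", digest)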
Dec 13 01:25:43.457257 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 01:25:43.462396 systemd[1]: Reached target network.target - Network. Dec 13 01:25:43.463100 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 13 01:25:43.463951 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 01:25:43.464013 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 13 01:25:43.465824 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Dec 13 01:25:43.510732 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Dec 13 01:25:43.511383 systemd-timesyncd[1492]: Contacted time server 10.0.0.1:123 (10.0.0.1). Dec 13 01:25:43.511445 systemd-timesyncd[1492]: Initial clock synchronization to Fri 2024-12-13 01:25:43.729395 UTC. Dec 13 01:25:43.511997 systemd[1]: Reached target sysinit.target - System Initialization. Dec 13 01:25:43.512854 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Dec 13 01:25:43.513756 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Dec 13 01:25:43.514645 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Dec 13 01:25:43.515509 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 13 01:25:43.515540 systemd[1]: Reached target paths.target - Path Units. Dec 13 01:25:43.516215 systemd[1]: Reached target time-set.target - System Time Set. Dec 13 01:25:43.517098 systemd[1]: Started logrotate.timer - Daily rotation of log files. Dec 13 01:25:43.517975 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Dec 13 01:25:43.518864 systemd[1]: Reached target timers.target - Timer Units. Dec 13 01:25:43.520223 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Dec 13 01:25:43.522293 systemd[1]: Starting docker.socket - Docker Socket for the API... Dec 13 01:25:43.524105 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Dec 13 01:25:43.530656 systemd[1]: Listening on docker.socket - Docker Socket for the API. Dec 13 01:25:43.531450 systemd[1]: Reached target sockets.target - Socket Units. Dec 13 01:25:43.532198 systemd[1]: Reached target basic.target - Basic System. Dec 13 01:25:43.533037 systemd[1]: System is tainted: cgroupsv1 Dec 13 01:25:43.533083 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Dec 13 01:25:43.533102 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Dec 13 01:25:43.534155 systemd[1]: Starting containerd.service - containerd container runtime... Dec 13 01:25:43.535943 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Dec 13 01:25:43.537572 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Dec 13 01:25:43.541775 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Dec 13 01:25:43.542584 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). 
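systemd-timesyncd reports contacting 10.0.0.1:123 and an initial synchronization to 2024-12-13 01:25:43.729395 UTC, while the surrounding journal entries carry stamps of roughly 01:25:43.511. A small Python sketch (timestamps copied from the log; comparing a pre-sync journal stamp with the post-sync target is only an approximation) estimates the size of that first adjustment:

    from datetime import datetime, timezone

    # Journal stamp of the "Contacted time server" message (pre-sync clock).
    before_sync = datetime(2024, 12, 13, 1, 25, 43, 511383, tzinfo=timezone.utc)
    # Wall-clock time timesyncd synchronized to, as logged.
    synced_to = datetime(2024, 12, 13, 1, 25, 43, 729395, tzinfo=timezone.utc)

    step = synced_to - before_sync
    print(f"approximate initial clock step: {step.total_seconds() * 1000:.1f} ms")
    # roughly 218 ms forward on this boot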
Dec 13 01:25:43.543688 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Dec 13 01:25:43.547144 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Dec 13 01:25:43.552067 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Dec 13 01:25:43.554501 jq[1498]: false Dec 13 01:25:43.556022 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Dec 13 01:25:43.564806 systemd[1]: Starting systemd-logind.service - User Login Management... Dec 13 01:25:43.570864 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 13 01:25:43.573909 systemd[1]: Starting update-engine.service - Update Engine... Dec 13 01:25:43.578175 extend-filesystems[1500]: Found loop3 Dec 13 01:25:43.578175 extend-filesystems[1500]: Found loop4 Dec 13 01:25:43.578175 extend-filesystems[1500]: Found loop5 Dec 13 01:25:43.578175 extend-filesystems[1500]: Found vda Dec 13 01:25:43.578175 extend-filesystems[1500]: Found vda1 Dec 13 01:25:43.578175 extend-filesystems[1500]: Found vda2 Dec 13 01:25:43.578175 extend-filesystems[1500]: Found vda3 Dec 13 01:25:43.578175 extend-filesystems[1500]: Found usr Dec 13 01:25:43.578175 extend-filesystems[1500]: Found vda4 Dec 13 01:25:43.578175 extend-filesystems[1500]: Found vda6 Dec 13 01:25:43.578175 extend-filesystems[1500]: Found vda7 Dec 13 01:25:43.578175 extend-filesystems[1500]: Found vda9 Dec 13 01:25:43.578175 extend-filesystems[1500]: Checking size of /dev/vda9 Dec 13 01:25:43.583112 dbus-daemon[1497]: [system] SELinux support is enabled Dec 13 01:25:43.603320 extend-filesystems[1500]: Resized partition /dev/vda9 Dec 13 01:25:43.578723 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Dec 13 01:25:43.587002 systemd[1]: Started dbus.service - D-Bus System Message Bus. Dec 13 01:25:43.604228 jq[1520]: true Dec 13 01:25:43.591770 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 13 01:25:43.592019 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Dec 13 01:25:43.592263 systemd[1]: motdgen.service: Deactivated successfully. Dec 13 01:25:43.592460 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Dec 13 01:25:43.597211 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 13 01:25:43.597426 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Dec 13 01:25:43.609839 (ntainerd)[1528]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Dec 13 01:25:43.618628 jq[1527]: true Dec 13 01:25:43.618747 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 13 01:25:43.618776 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Dec 13 01:25:43.620800 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 13 01:25:43.620831 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
Dec 13 01:25:43.625579 extend-filesystems[1532]: resize2fs 1.47.1 (20-May-2024) Dec 13 01:25:43.632637 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 46 scanned by (udev-worker) (1231) Dec 13 01:25:43.632689 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Dec 13 01:25:43.634898 update_engine[1518]: I20241213 01:25:43.634239 1518 main.cc:92] Flatcar Update Engine starting Dec 13 01:25:43.643813 update_engine[1518]: I20241213 01:25:43.641117 1518 update_check_scheduler.cc:74] Next update check in 3m9s Dec 13 01:25:43.645587 tar[1526]: linux-arm64/helm Dec 13 01:25:43.648655 systemd[1]: Started update-engine.service - Update Engine. Dec 13 01:25:43.652185 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 13 01:25:43.657790 systemd[1]: Started locksmithd.service - Cluster reboot manager. Dec 13 01:25:43.658373 systemd-logind[1510]: Watching system buttons on /dev/input/event0 (Power Button) Dec 13 01:25:43.659019 systemd-logind[1510]: New seat seat0. Dec 13 01:25:43.660488 systemd[1]: Started systemd-logind.service - User Login Management. Dec 13 01:25:43.669195 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Dec 13 01:25:43.685116 extend-filesystems[1532]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Dec 13 01:25:43.685116 extend-filesystems[1532]: old_desc_blocks = 1, new_desc_blocks = 1 Dec 13 01:25:43.685116 extend-filesystems[1532]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Dec 13 01:25:43.685342 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 13 01:25:43.689019 extend-filesystems[1500]: Resized filesystem in /dev/vda9 Dec 13 01:25:43.685593 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Dec 13 01:25:43.701391 bash[1566]: Updated "/home/core/.ssh/authorized_keys" Dec 13 01:25:43.702736 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Dec 13 01:25:43.704285 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Dec 13 01:25:43.713800 systemd-networkd[1220]: eth0: Gained IPv6LL Dec 13 01:25:43.717894 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Dec 13 01:25:43.719257 systemd[1]: Reached target network-online.target - Network is Online. Dec 13 01:25:43.726858 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Dec 13 01:25:43.729098 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:25:43.733847 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Dec 13 01:25:43.760462 systemd[1]: coreos-metadata.service: Deactivated successfully. Dec 13 01:25:43.760736 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Dec 13 01:25:43.761951 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Dec 13 01:25:43.769384 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. 
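The root filesystem on /dev/vda9 is grown online from 553472 to 1864699 blocks, and resize2fs confirms 4k blocks. A quick Python sketch (block counts and block size copied from the log) converts those counts into sizes:

    BLOCK_SIZE = 4096  # 4k blocks, per the resize2fs output above

    old_blocks = 553_472
    new_blocks = 1_864_699

    def gib(blocks: int) -> float:
        """Convert an ext4 block count to GiB."""
        return blocks * BLOCK_SIZE / 2**30

    print(f"before resize: {gib(old_blocks):.2f} GiB")              # ~2.11 GiB
    print(f"after  resize: {gib(new_blocks):.2f} GiB")              # ~7.11 GiB
    print(f"gained       : {gib(new_blocks - old_blocks):.2f} GiB") # ~5.00 GiB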
Dec 13 01:25:43.794698 locksmithd[1545]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 13 01:25:43.896103 containerd[1528]: time="2024-12-13T01:25:43.896015440Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Dec 13 01:25:43.930431 containerd[1528]: time="2024-12-13T01:25:43.930331840Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:25:43.932601 containerd[1528]: time="2024-12-13T01:25:43.932557720Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.65-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:25:43.932601 containerd[1528]: time="2024-12-13T01:25:43.932596800Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Dec 13 01:25:43.932687 containerd[1528]: time="2024-12-13T01:25:43.932625800Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Dec 13 01:25:43.932863 containerd[1528]: time="2024-12-13T01:25:43.932836280Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Dec 13 01:25:43.932891 containerd[1528]: time="2024-12-13T01:25:43.932864360Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Dec 13 01:25:43.933001 containerd[1528]: time="2024-12-13T01:25:43.932978920Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:25:43.933027 containerd[1528]: time="2024-12-13T01:25:43.933001360Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:25:43.933334 containerd[1528]: time="2024-12-13T01:25:43.933303840Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:25:43.933334 containerd[1528]: time="2024-12-13T01:25:43.933330880Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Dec 13 01:25:43.933384 containerd[1528]: time="2024-12-13T01:25:43.933345360Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:25:43.933384 containerd[1528]: time="2024-12-13T01:25:43.933356080Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Dec 13 01:25:43.933527 containerd[1528]: time="2024-12-13T01:25:43.933502320Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:25:43.934476 containerd[1528]: time="2024-12-13T01:25:43.933990600Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:25:43.934476 containerd[1528]: time="2024-12-13T01:25:43.934268040Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:25:43.934476 containerd[1528]: time="2024-12-13T01:25:43.934287280Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Dec 13 01:25:43.934476 containerd[1528]: time="2024-12-13T01:25:43.934372280Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Dec 13 01:25:43.934581 containerd[1528]: time="2024-12-13T01:25:43.934483080Z" level=info msg="metadata content store policy set" policy=shared Dec 13 01:25:43.938582 containerd[1528]: time="2024-12-13T01:25:43.938545000Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Dec 13 01:25:43.938672 containerd[1528]: time="2024-12-13T01:25:43.938602800Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Dec 13 01:25:43.938672 containerd[1528]: time="2024-12-13T01:25:43.938652560Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Dec 13 01:25:43.939060 containerd[1528]: time="2024-12-13T01:25:43.938669240Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Dec 13 01:25:43.939060 containerd[1528]: time="2024-12-13T01:25:43.938766120Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Dec 13 01:25:43.939060 containerd[1528]: time="2024-12-13T01:25:43.938946160Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Dec 13 01:25:43.939497 containerd[1528]: time="2024-12-13T01:25:43.939469320Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Dec 13 01:25:43.939636 containerd[1528]: time="2024-12-13T01:25:43.939595960Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Dec 13 01:25:43.939667 containerd[1528]: time="2024-12-13T01:25:43.939638520Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Dec 13 01:25:43.939698 containerd[1528]: time="2024-12-13T01:25:43.939687680Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Dec 13 01:25:43.939719 containerd[1528]: time="2024-12-13T01:25:43.939707520Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Dec 13 01:25:43.939748 containerd[1528]: time="2024-12-13T01:25:43.939723120Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Dec 13 01:25:43.939748 containerd[1528]: time="2024-12-13T01:25:43.939735520Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Dec 13 01:25:43.939788 containerd[1528]: time="2024-12-13T01:25:43.939748720Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Dec 13 01:25:43.939788 containerd[1528]: time="2024-12-13T01:25:43.939763120Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." 
type=io.containerd.service.v1 Dec 13 01:25:43.939788 containerd[1528]: time="2024-12-13T01:25:43.939775320Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Dec 13 01:25:43.939788 containerd[1528]: time="2024-12-13T01:25:43.939786880Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Dec 13 01:25:43.939860 containerd[1528]: time="2024-12-13T01:25:43.939798800Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Dec 13 01:25:43.939860 containerd[1528]: time="2024-12-13T01:25:43.939819440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Dec 13 01:25:43.939860 containerd[1528]: time="2024-12-13T01:25:43.939833960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Dec 13 01:25:43.939860 containerd[1528]: time="2024-12-13T01:25:43.939845880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Dec 13 01:25:43.939860 containerd[1528]: time="2024-12-13T01:25:43.939858000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Dec 13 01:25:43.939946 containerd[1528]: time="2024-12-13T01:25:43.939875920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Dec 13 01:25:43.939946 containerd[1528]: time="2024-12-13T01:25:43.939889400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Dec 13 01:25:43.939946 containerd[1528]: time="2024-12-13T01:25:43.939901320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Dec 13 01:25:43.939946 containerd[1528]: time="2024-12-13T01:25:43.939919400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Dec 13 01:25:43.939946 containerd[1528]: time="2024-12-13T01:25:43.939932320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Dec 13 01:25:43.940026 containerd[1528]: time="2024-12-13T01:25:43.939955400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Dec 13 01:25:43.940026 containerd[1528]: time="2024-12-13T01:25:43.939970600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Dec 13 01:25:43.940026 containerd[1528]: time="2024-12-13T01:25:43.939984560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Dec 13 01:25:43.940026 containerd[1528]: time="2024-12-13T01:25:43.939997120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Dec 13 01:25:43.940026 containerd[1528]: time="2024-12-13T01:25:43.940011880Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Dec 13 01:25:43.940109 containerd[1528]: time="2024-12-13T01:25:43.940031440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Dec 13 01:25:43.940109 containerd[1528]: time="2024-12-13T01:25:43.940043880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." 
type=io.containerd.grpc.v1 Dec 13 01:25:43.940109 containerd[1528]: time="2024-12-13T01:25:43.940058800Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Dec 13 01:25:43.940452 containerd[1528]: time="2024-12-13T01:25:43.940168240Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Dec 13 01:25:43.940452 containerd[1528]: time="2024-12-13T01:25:43.940188120Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Dec 13 01:25:43.940452 containerd[1528]: time="2024-12-13T01:25:43.940200000Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Dec 13 01:25:43.940452 containerd[1528]: time="2024-12-13T01:25:43.940211320Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Dec 13 01:25:43.940452 containerd[1528]: time="2024-12-13T01:25:43.940220600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Dec 13 01:25:43.940452 containerd[1528]: time="2024-12-13T01:25:43.940232320Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Dec 13 01:25:43.940452 containerd[1528]: time="2024-12-13T01:25:43.940242000Z" level=info msg="NRI interface is disabled by configuration." Dec 13 01:25:43.940452 containerd[1528]: time="2024-12-13T01:25:43.940255760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Dec 13 01:25:43.941498 containerd[1528]: time="2024-12-13T01:25:43.940570640Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false 
X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Dec 13 01:25:43.941498 containerd[1528]: time="2024-12-13T01:25:43.940677520Z" level=info msg="Connect containerd service" Dec 13 01:25:43.941498 containerd[1528]: time="2024-12-13T01:25:43.940704760Z" level=info msg="using legacy CRI server" Dec 13 01:25:43.941498 containerd[1528]: time="2024-12-13T01:25:43.940711480Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 13 01:25:43.941498 containerd[1528]: time="2024-12-13T01:25:43.940784760Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Dec 13 01:25:43.941762 containerd[1528]: time="2024-12-13T01:25:43.941730800Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 01:25:43.942475 containerd[1528]: time="2024-12-13T01:25:43.941914760Z" level=info msg="Start subscribing containerd event" Dec 13 01:25:43.942475 containerd[1528]: time="2024-12-13T01:25:43.941966440Z" level=info msg="Start recovering state" Dec 13 01:25:43.942475 containerd[1528]: time="2024-12-13T01:25:43.942029360Z" level=info msg="Start event monitor" Dec 13 01:25:43.942475 containerd[1528]: time="2024-12-13T01:25:43.942040560Z" level=info msg="Start snapshots syncer" Dec 13 01:25:43.942475 containerd[1528]: time="2024-12-13T01:25:43.942049280Z" level=info msg="Start cni network conf syncer for default" Dec 13 01:25:43.942475 containerd[1528]: time="2024-12-13T01:25:43.942057040Z" level=info msg="Start streaming server" Dec 13 01:25:43.947805 containerd[1528]: time="2024-12-13T01:25:43.946254440Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 13 01:25:43.947805 containerd[1528]: time="2024-12-13T01:25:43.946337840Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 13 01:25:43.946527 systemd[1]: Started containerd.service - containerd container runtime. Dec 13 01:25:43.948746 containerd[1528]: time="2024-12-13T01:25:43.948296520Z" level=info msg="containerd successfully booted in 0.055964s" Dec 13 01:25:44.065658 tar[1526]: linux-arm64/LICENSE Dec 13 01:25:44.066521 tar[1526]: linux-arm64/README.md Dec 13 01:25:44.085996 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Dec 13 01:25:44.218805 sshd_keygen[1517]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 13 01:25:44.246235 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Dec 13 01:25:44.251686 systemd[1]: Starting issuegen.service - Generate /run/issue... 
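The daemon above reports serving its API on /run/containerd/containerd.sock before systemd marks containerd.service started. A minimal Go sketch, assuming the github.com/containerd/containerd client module is available on the host, of how a client could reach that same endpoint in the CRI's "k8s.io" namespace and query the daemon version; this is illustrative only and not taken from this boot.

```go
// Illustrative sketch, not part of this log: connect to the socket the daemon
// reports serving on and print its version.
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	// The address matches the "serving..." entries in the log above.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatalf("connect: %v", err)
	}
	defer client.Close()

	// CRI-managed images and containers live in the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	v, err := client.Version(ctx)
	if err != nil {
		log.Fatalf("version: %v", err)
	}
	fmt.Printf("containerd %s (revision %s)\n", v.Version, v.Revision)
}
```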
Dec 13 01:25:44.255075 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:25:44.258807 (kubelet)[1626]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:25:44.262162 systemd[1]: issuegen.service: Deactivated successfully. Dec 13 01:25:44.262413 systemd[1]: Finished issuegen.service - Generate /run/issue. Dec 13 01:25:44.272865 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Dec 13 01:25:44.280911 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Dec 13 01:25:44.283366 systemd[1]: Started getty@tty1.service - Getty on tty1. Dec 13 01:25:44.285286 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Dec 13 01:25:44.286480 systemd[1]: Reached target getty.target - Login Prompts. Dec 13 01:25:44.287471 systemd[1]: Reached target multi-user.target - Multi-User System. Dec 13 01:25:44.288442 systemd[1]: Startup finished in 5.192s (kernel) + 2.965s (userspace) = 8.157s. Dec 13 01:25:44.732147 kubelet[1626]: E1213 01:25:44.732024 1626 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:25:44.735081 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:25:44.735269 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:25:49.740335 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Dec 13 01:25:49.747941 systemd[1]: Started sshd@0-10.0.0.44:22-10.0.0.1:43792.service - OpenSSH per-connection server daemon (10.0.0.1:43792). Dec 13 01:25:49.794194 sshd[1649]: Accepted publickey for core from 10.0.0.1 port 43792 ssh2: RSA SHA256:yVKhZEHbC7ylZ7bY3Y8pwdh1t/xp6Vz/y3yLFfd9j+Q Dec 13 01:25:49.795849 sshd[1649]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:25:49.814923 systemd-logind[1510]: New session 1 of user core. Dec 13 01:25:49.815749 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Dec 13 01:25:49.821809 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Dec 13 01:25:49.830584 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Dec 13 01:25:49.832649 systemd[1]: Starting user@500.service - User Manager for UID 500... Dec 13 01:25:49.838324 (systemd)[1655]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 13 01:25:49.916308 systemd[1655]: Queued start job for default target default.target. Dec 13 01:25:49.916677 systemd[1655]: Created slice app.slice - User Application Slice. Dec 13 01:25:49.916701 systemd[1655]: Reached target paths.target - Paths. Dec 13 01:25:49.916712 systemd[1655]: Reached target timers.target - Timers. Dec 13 01:25:49.925764 systemd[1655]: Starting dbus.socket - D-Bus User Message Bus Socket... Dec 13 01:25:49.931969 systemd[1655]: Listening on dbus.socket - D-Bus User Message Bus Socket. Dec 13 01:25:49.932027 systemd[1655]: Reached target sockets.target - Sockets. Dec 13 01:25:49.932039 systemd[1655]: Reached target basic.target - Basic System. Dec 13 01:25:49.932090 systemd[1655]: Reached target default.target - Main User Target. 
Dec 13 01:25:49.932117 systemd[1655]: Startup finished in 89ms. Dec 13 01:25:49.932227 systemd[1]: Started user@500.service - User Manager for UID 500. Dec 13 01:25:49.933488 systemd[1]: Started session-1.scope - Session 1 of User core. Dec 13 01:25:49.994943 systemd[1]: Started sshd@1-10.0.0.44:22-10.0.0.1:43808.service - OpenSSH per-connection server daemon (10.0.0.1:43808). Dec 13 01:25:50.030302 sshd[1667]: Accepted publickey for core from 10.0.0.1 port 43808 ssh2: RSA SHA256:yVKhZEHbC7ylZ7bY3Y8pwdh1t/xp6Vz/y3yLFfd9j+Q Dec 13 01:25:50.031489 sshd[1667]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:25:50.035674 systemd-logind[1510]: New session 2 of user core. Dec 13 01:25:50.053915 systemd[1]: Started session-2.scope - Session 2 of User core. Dec 13 01:25:50.106812 sshd[1667]: pam_unix(sshd:session): session closed for user core Dec 13 01:25:50.116843 systemd[1]: Started sshd@2-10.0.0.44:22-10.0.0.1:43812.service - OpenSSH per-connection server daemon (10.0.0.1:43812). Dec 13 01:25:50.117202 systemd[1]: sshd@1-10.0.0.44:22-10.0.0.1:43808.service: Deactivated successfully. Dec 13 01:25:50.119043 systemd-logind[1510]: Session 2 logged out. Waiting for processes to exit. Dec 13 01:25:50.119560 systemd[1]: session-2.scope: Deactivated successfully. Dec 13 01:25:50.121017 systemd-logind[1510]: Removed session 2. Dec 13 01:25:50.151107 sshd[1672]: Accepted publickey for core from 10.0.0.1 port 43812 ssh2: RSA SHA256:yVKhZEHbC7ylZ7bY3Y8pwdh1t/xp6Vz/y3yLFfd9j+Q Dec 13 01:25:50.152208 sshd[1672]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:25:50.156089 systemd-logind[1510]: New session 3 of user core. Dec 13 01:25:50.169836 systemd[1]: Started session-3.scope - Session 3 of User core. Dec 13 01:25:50.217745 sshd[1672]: pam_unix(sshd:session): session closed for user core Dec 13 01:25:50.225873 systemd[1]: Started sshd@3-10.0.0.44:22-10.0.0.1:43824.service - OpenSSH per-connection server daemon (10.0.0.1:43824). Dec 13 01:25:50.226454 systemd[1]: sshd@2-10.0.0.44:22-10.0.0.1:43812.service: Deactivated successfully. Dec 13 01:25:50.227885 systemd[1]: session-3.scope: Deactivated successfully. Dec 13 01:25:50.228451 systemd-logind[1510]: Session 3 logged out. Waiting for processes to exit. Dec 13 01:25:50.230223 systemd-logind[1510]: Removed session 3. Dec 13 01:25:50.259820 sshd[1680]: Accepted publickey for core from 10.0.0.1 port 43824 ssh2: RSA SHA256:yVKhZEHbC7ylZ7bY3Y8pwdh1t/xp6Vz/y3yLFfd9j+Q Dec 13 01:25:50.260901 sshd[1680]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:25:50.265650 systemd-logind[1510]: New session 4 of user core. Dec 13 01:25:50.270846 systemd[1]: Started session-4.scope - Session 4 of User core. Dec 13 01:25:50.321745 sshd[1680]: pam_unix(sshd:session): session closed for user core Dec 13 01:25:50.332925 systemd[1]: Started sshd@4-10.0.0.44:22-10.0.0.1:43834.service - OpenSSH per-connection server daemon (10.0.0.1:43834). Dec 13 01:25:50.333495 systemd[1]: sshd@3-10.0.0.44:22-10.0.0.1:43824.service: Deactivated successfully. Dec 13 01:25:50.335171 systemd[1]: session-4.scope: Deactivated successfully. Dec 13 01:25:50.335533 systemd-logind[1510]: Session 4 logged out. Waiting for processes to exit. Dec 13 01:25:50.336610 systemd-logind[1510]: Removed session 4. 
Dec 13 01:25:50.366846 sshd[1688]: Accepted publickey for core from 10.0.0.1 port 43834 ssh2: RSA SHA256:yVKhZEHbC7ylZ7bY3Y8pwdh1t/xp6Vz/y3yLFfd9j+Q Dec 13 01:25:50.367967 sshd[1688]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:25:50.371700 systemd-logind[1510]: New session 5 of user core. Dec 13 01:25:50.383985 systemd[1]: Started session-5.scope - Session 5 of User core. Dec 13 01:25:50.452043 sudo[1695]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Dec 13 01:25:50.452311 sudo[1695]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 01:25:50.467369 sudo[1695]: pam_unix(sudo:session): session closed for user root Dec 13 01:25:50.469902 sshd[1688]: pam_unix(sshd:session): session closed for user core Dec 13 01:25:50.477840 systemd[1]: Started sshd@5-10.0.0.44:22-10.0.0.1:43844.service - OpenSSH per-connection server daemon (10.0.0.1:43844). Dec 13 01:25:50.478216 systemd[1]: sshd@4-10.0.0.44:22-10.0.0.1:43834.service: Deactivated successfully. Dec 13 01:25:50.479903 systemd-logind[1510]: Session 5 logged out. Waiting for processes to exit. Dec 13 01:25:50.480419 systemd[1]: session-5.scope: Deactivated successfully. Dec 13 01:25:50.481883 systemd-logind[1510]: Removed session 5. Dec 13 01:25:50.512447 sshd[1697]: Accepted publickey for core from 10.0.0.1 port 43844 ssh2: RSA SHA256:yVKhZEHbC7ylZ7bY3Y8pwdh1t/xp6Vz/y3yLFfd9j+Q Dec 13 01:25:50.513920 sshd[1697]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:25:50.517909 systemd-logind[1510]: New session 6 of user core. Dec 13 01:25:50.525860 systemd[1]: Started session-6.scope - Session 6 of User core. Dec 13 01:25:50.577024 sudo[1705]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Dec 13 01:25:50.577282 sudo[1705]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 01:25:50.580586 sudo[1705]: pam_unix(sudo:session): session closed for user root Dec 13 01:25:50.584953 sudo[1704]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Dec 13 01:25:50.585216 sudo[1704]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 01:25:50.605859 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Dec 13 01:25:50.607323 auditctl[1708]: No rules Dec 13 01:25:50.607689 systemd[1]: audit-rules.service: Deactivated successfully. Dec 13 01:25:50.607916 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Dec 13 01:25:50.610139 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Dec 13 01:25:50.632933 augenrules[1727]: No rules Dec 13 01:25:50.634201 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Dec 13 01:25:50.635544 sudo[1704]: pam_unix(sudo:session): session closed for user root Dec 13 01:25:50.637114 sshd[1697]: pam_unix(sshd:session): session closed for user core Dec 13 01:25:50.649885 systemd[1]: Started sshd@6-10.0.0.44:22-10.0.0.1:43860.service - OpenSSH per-connection server daemon (10.0.0.1:43860). Dec 13 01:25:50.650269 systemd[1]: sshd@5-10.0.0.44:22-10.0.0.1:43844.service: Deactivated successfully. Dec 13 01:25:50.652010 systemd-logind[1510]: Session 6 logged out. Waiting for processes to exit. Dec 13 01:25:50.652515 systemd[1]: session-6.scope: Deactivated successfully. Dec 13 01:25:50.653940 systemd-logind[1510]: Removed session 6. 
Dec 13 01:25:50.684073 sshd[1733]: Accepted publickey for core from 10.0.0.1 port 43860 ssh2: RSA SHA256:yVKhZEHbC7ylZ7bY3Y8pwdh1t/xp6Vz/y3yLFfd9j+Q Dec 13 01:25:50.685330 sshd[1733]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:25:50.689470 systemd-logind[1510]: New session 7 of user core. Dec 13 01:25:50.699867 systemd[1]: Started session-7.scope - Session 7 of User core. Dec 13 01:25:50.752132 sudo[1740]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 13 01:25:50.752780 sudo[1740]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 01:25:51.055858 systemd[1]: Starting docker.service - Docker Application Container Engine... Dec 13 01:25:51.056003 (dockerd)[1759]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Dec 13 01:25:51.332468 dockerd[1759]: time="2024-12-13T01:25:51.332343793Z" level=info msg="Starting up" Dec 13 01:25:51.400135 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport2959803822-merged.mount: Deactivated successfully. Dec 13 01:25:51.573115 dockerd[1759]: time="2024-12-13T01:25:51.572901265Z" level=info msg="Loading containers: start." Dec 13 01:25:51.655651 kernel: Initializing XFRM netlink socket Dec 13 01:25:51.714531 systemd-networkd[1220]: docker0: Link UP Dec 13 01:25:51.738934 dockerd[1759]: time="2024-12-13T01:25:51.738804788Z" level=info msg="Loading containers: done." Dec 13 01:25:51.753453 dockerd[1759]: time="2024-12-13T01:25:51.753378961Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 13 01:25:51.753570 dockerd[1759]: time="2024-12-13T01:25:51.753520460Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Dec 13 01:25:51.753657 dockerd[1759]: time="2024-12-13T01:25:51.753639925Z" level=info msg="Daemon has completed initialization" Dec 13 01:25:51.782890 dockerd[1759]: time="2024-12-13T01:25:51.782842161Z" level=info msg="API listen on /run/docker.sock" Dec 13 01:25:51.783009 systemd[1]: Started docker.service - Docker Application Container Engine. Dec 13 01:25:52.398105 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1841751752-merged.mount: Deactivated successfully. Dec 13 01:25:52.485156 containerd[1528]: time="2024-12-13T01:25:52.485083039Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\"" Dec 13 01:25:53.161150 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2345853718.mount: Deactivated successfully. Dec 13 01:25:54.750475 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 13 01:25:54.764852 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:25:54.872293 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Dec 13 01:25:54.876404 (kubelet)[1978]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:25:54.923792 kubelet[1978]: E1213 01:25:54.923658 1978 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:25:54.928161 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:25:54.928387 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:25:55.321696 containerd[1528]: time="2024-12-13T01:25:55.321588120Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:25:55.322558 containerd[1528]: time="2024-12-13T01:25:55.322313967Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.12: active requests=0, bytes read=32201252" Dec 13 01:25:55.323449 containerd[1528]: time="2024-12-13T01:25:55.323408835Z" level=info msg="ImageCreate event name:\"sha256:50c86b7f73fdd28bacd4abf45260c9d3abc3b57eb038fa61fc45b5d0f2763e6f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:25:55.326500 containerd[1528]: time="2024-12-13T01:25:55.326462559Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:25:55.327977 containerd[1528]: time="2024-12-13T01:25:55.327925562Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.12\" with image id \"sha256:50c86b7f73fdd28bacd4abf45260c9d3abc3b57eb038fa61fc45b5d0f2763e6f\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26\", size \"32198050\" in 2.842799237s" Dec 13 01:25:55.327977 containerd[1528]: time="2024-12-13T01:25:55.327967544Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\" returns image reference \"sha256:50c86b7f73fdd28bacd4abf45260c9d3abc3b57eb038fa61fc45b5d0f2763e6f\"" Dec 13 01:25:55.347166 containerd[1528]: time="2024-12-13T01:25:55.347118093Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\"" Dec 13 01:25:57.225673 containerd[1528]: time="2024-12-13T01:25:57.225624318Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:25:57.226714 containerd[1528]: time="2024-12-13T01:25:57.226682377Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.12: active requests=0, bytes read=29381299" Dec 13 01:25:57.228014 containerd[1528]: time="2024-12-13T01:25:57.227953334Z" level=info msg="ImageCreate event name:\"sha256:2d47abaa6ccc533f84ef74fff6d509de10bb040317351b45afe95a8021a1ddf7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:25:57.231394 containerd[1528]: time="2024-12-13T01:25:57.231340209Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 
01:25:57.232386 containerd[1528]: time="2024-12-13T01:25:57.232142646Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.12\" with image id \"sha256:2d47abaa6ccc533f84ef74fff6d509de10bb040317351b45afe95a8021a1ddf7\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412\", size \"30783618\" in 1.884986569s" Dec 13 01:25:57.232386 containerd[1528]: time="2024-12-13T01:25:57.232177734Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\" returns image reference \"sha256:2d47abaa6ccc533f84ef74fff6d509de10bb040317351b45afe95a8021a1ddf7\"" Dec 13 01:25:57.250969 containerd[1528]: time="2024-12-13T01:25:57.250909502Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\"" Dec 13 01:25:58.695666 containerd[1528]: time="2024-12-13T01:25:58.695568162Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:25:58.696336 containerd[1528]: time="2024-12-13T01:25:58.696300508Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.12: active requests=0, bytes read=15765642" Dec 13 01:25:58.696931 containerd[1528]: time="2024-12-13T01:25:58.696902910Z" level=info msg="ImageCreate event name:\"sha256:ae633c52a23907b58f7a7867d2cccf3d3f5ebd8977beb6788e20fbecd3f446db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:25:58.700439 containerd[1528]: time="2024-12-13T01:25:58.700367896Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:25:58.701523 containerd[1528]: time="2024-12-13T01:25:58.701477019Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.12\" with image id \"sha256:ae633c52a23907b58f7a7867d2cccf3d3f5ebd8977beb6788e20fbecd3f446db\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c\", size \"17167979\" in 1.45053143s" Dec 13 01:25:58.701523 containerd[1528]: time="2024-12-13T01:25:58.701521284Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\" returns image reference \"sha256:ae633c52a23907b58f7a7867d2cccf3d3f5ebd8977beb6788e20fbecd3f446db\"" Dec 13 01:25:58.721036 containerd[1528]: time="2024-12-13T01:25:58.720980589Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\"" Dec 13 01:25:59.757693 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount703567503.mount: Deactivated successfully. 
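The PullImage/Pulled entries above are issued by the CRI plugin against the same containerd socket. A hedged sketch, again assuming the containerd Go client module, of an equivalent pull in the k8s.io namespace; the pause:3.9 reference is simply one of the images that appears later in this log.

```go
// Illustrative sketch, not part of this log: pull and unpack an image the way
// the log's PullImage entries describe, then report its name and size.
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// Pull and unpack one of the images referenced in the log.
	img, err := client.Pull(ctx, "registry.k8s.io/pause:3.9", containerd.WithPullUnpack)
	if err != nil {
		log.Fatalf("pull: %v", err)
	}

	size, err := img.Size(ctx)
	if err != nil {
		log.Fatalf("size: %v", err)
	}
	fmt.Printf("pulled %s (%d bytes)\n", img.Name(), size)
}
```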
Dec 13 01:26:00.074720 containerd[1528]: time="2024-12-13T01:26:00.074258885Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:26:00.075159 containerd[1528]: time="2024-12-13T01:26:00.074863064Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.12: active requests=0, bytes read=25273979" Dec 13 01:26:00.075794 containerd[1528]: time="2024-12-13T01:26:00.075732493Z" level=info msg="ImageCreate event name:\"sha256:768ee8cfd9311233d038d18430c18136e1ae4dd2e6de40fcf1c670bba2da6d06\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:26:00.077748 containerd[1528]: time="2024-12-13T01:26:00.077703537Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:26:00.078393 containerd[1528]: time="2024-12-13T01:26:00.078352218Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.12\" with image id \"sha256:768ee8cfd9311233d038d18430c18136e1ae4dd2e6de40fcf1c670bba2da6d06\", repo tag \"registry.k8s.io/kube-proxy:v1.29.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\", size \"25272996\" in 1.357327656s" Dec 13 01:26:00.078393 containerd[1528]: time="2024-12-13T01:26:00.078388896Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\" returns image reference \"sha256:768ee8cfd9311233d038d18430c18136e1ae4dd2e6de40fcf1c670bba2da6d06\"" Dec 13 01:26:00.097552 containerd[1528]: time="2024-12-13T01:26:00.097499410Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Dec 13 01:26:00.715216 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3023417764.mount: Deactivated successfully. 
Dec 13 01:26:01.607642 containerd[1528]: time="2024-12-13T01:26:01.607574350Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:26:01.608620 containerd[1528]: time="2024-12-13T01:26:01.608528951Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485383" Dec 13 01:26:01.609488 containerd[1528]: time="2024-12-13T01:26:01.609433612Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:26:01.612574 containerd[1528]: time="2024-12-13T01:26:01.612543947Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:26:01.616527 containerd[1528]: time="2024-12-13T01:26:01.616206193Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.518661s" Dec 13 01:26:01.616527 containerd[1528]: time="2024-12-13T01:26:01.616256735Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Dec 13 01:26:01.635992 containerd[1528]: time="2024-12-13T01:26:01.635926817Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Dec 13 01:26:02.088513 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2989138499.mount: Deactivated successfully. 
Dec 13 01:26:02.093512 containerd[1528]: time="2024-12-13T01:26:02.093456302Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:26:02.094537 containerd[1528]: time="2024-12-13T01:26:02.094499747Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268823" Dec 13 01:26:02.095632 containerd[1528]: time="2024-12-13T01:26:02.095552334Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:26:02.099466 containerd[1528]: time="2024-12-13T01:26:02.099116215Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:26:02.099466 containerd[1528]: time="2024-12-13T01:26:02.099372485Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 463.374431ms" Dec 13 01:26:02.099466 containerd[1528]: time="2024-12-13T01:26:02.099430428Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Dec 13 01:26:02.120378 containerd[1528]: time="2024-12-13T01:26:02.120135687Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Dec 13 01:26:02.741012 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1578539666.mount: Deactivated successfully. Dec 13 01:26:05.000669 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Dec 13 01:26:05.009820 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:26:05.094827 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:26:05.099745 (kubelet)[2145]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:26:05.141497 kubelet[2145]: E1213 01:26:05.141391 2145 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:26:05.144301 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:26:05.144722 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
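The kubelet.service exits above keep recurring because /var/lib/kubelet/config.yaml does not exist yet; that file is a KubeletConfiguration document, typically written during node bootstrap (for example by kubeadm), after which the scheduled restarts succeed. A minimal sketch, assuming the k8s.io/kubelet/config/v1beta1 and sigs.k8s.io/yaml modules, that reproduces the same check and decodes the file once it is present.

```go
// Illustrative sketch, not part of this log: check for the config file the
// kubelet is failing on, and decode it as a KubeletConfiguration if it exists.
package main

import (
	"fmt"
	"log"
	"os"

	kubeletv1beta1 "k8s.io/kubelet/config/v1beta1"
	"sigs.k8s.io/yaml"
)

const configPath = "/var/lib/kubelet/config.yaml"

func main() {
	data, err := os.ReadFile(configPath)
	if os.IsNotExist(err) {
		// Same condition the kubelet reports; the unit keeps restarting
		// until the file is created during node bootstrap.
		log.Fatalf("%s not found; kubelet will keep exiting until it is created", configPath)
	} else if err != nil {
		log.Fatal(err)
	}

	var cfg kubeletv1beta1.KubeletConfiguration
	if err := yaml.Unmarshal(data, &cfg); err != nil {
		log.Fatalf("decode: %v", err)
	}
	fmt.Printf("staticPodPath=%s cgroupDriver=%s\n", cfg.StaticPodPath, cfg.CgroupDriver)
}
```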
Dec 13 01:26:05.777459 containerd[1528]: time="2024-12-13T01:26:05.777407204Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:26:05.778533 containerd[1528]: time="2024-12-13T01:26:05.778496239Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=65200788" Dec 13 01:26:05.779298 containerd[1528]: time="2024-12-13T01:26:05.779254569Z" level=info msg="ImageCreate event name:\"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:26:05.782794 containerd[1528]: time="2024-12-13T01:26:05.782720842Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:26:05.784503 containerd[1528]: time="2024-12-13T01:26:05.784410988Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"65198393\" in 3.664226235s" Dec 13 01:26:05.784600 containerd[1528]: time="2024-12-13T01:26:05.784493284Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\"" Dec 13 01:26:11.105664 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:26:11.118173 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:26:11.133136 systemd[1]: Reloading requested from client PID 2242 ('systemctl') (unit session-7.scope)... Dec 13 01:26:11.133153 systemd[1]: Reloading... Dec 13 01:26:11.191637 zram_generator::config[2282]: No configuration found. Dec 13 01:26:11.326627 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:26:11.376263 systemd[1]: Reloading finished in 242 ms. Dec 13 01:26:11.413067 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Dec 13 01:26:11.413131 systemd[1]: kubelet.service: Failed with result 'signal'. Dec 13 01:26:11.413386 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:26:11.415068 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:26:11.513347 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:26:11.517767 (kubelet)[2338]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 13 01:26:11.557621 kubelet[2338]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 01:26:11.557621 kubelet[2338]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Dec 13 01:26:11.557621 kubelet[2338]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 01:26:11.558498 kubelet[2338]: I1213 01:26:11.558429 2338 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 01:26:12.058881 kubelet[2338]: I1213 01:26:12.058837 2338 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Dec 13 01:26:12.058881 kubelet[2338]: I1213 01:26:12.058867 2338 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 01:26:12.059090 kubelet[2338]: I1213 01:26:12.059063 2338 server.go:919] "Client rotation is on, will bootstrap in background" Dec 13 01:26:12.108758 kubelet[2338]: E1213 01:26:12.108727 2338 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.44:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.44:6443: connect: connection refused Dec 13 01:26:12.108837 kubelet[2338]: I1213 01:26:12.108793 2338 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 01:26:12.119050 kubelet[2338]: I1213 01:26:12.119022 2338 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Dec 13 01:26:12.119803 kubelet[2338]: I1213 01:26:12.119773 2338 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 01:26:12.119979 kubelet[2338]: I1213 01:26:12.119953 2338 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 01:26:12.119979 kubelet[2338]: I1213 01:26:12.119977 2338 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 01:26:12.120072 kubelet[2338]: I1213 01:26:12.119986 2338 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 
01:26:12.121061 kubelet[2338]: I1213 01:26:12.121029 2338 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:26:12.123163 kubelet[2338]: I1213 01:26:12.123092 2338 kubelet.go:396] "Attempting to sync node with API server" Dec 13 01:26:12.123163 kubelet[2338]: I1213 01:26:12.123148 2338 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 01:26:12.123163 kubelet[2338]: I1213 01:26:12.123171 2338 kubelet.go:312] "Adding apiserver pod source" Dec 13 01:26:12.123292 kubelet[2338]: I1213 01:26:12.123186 2338 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 01:26:12.123569 kubelet[2338]: W1213 01:26:12.123527 2338 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.44:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.44:6443: connect: connection refused Dec 13 01:26:12.123640 kubelet[2338]: E1213 01:26:12.123581 2338 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.44:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.44:6443: connect: connection refused Dec 13 01:26:12.126304 kubelet[2338]: W1213 01:26:12.126263 2338 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.44:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.44:6443: connect: connection refused Dec 13 01:26:12.126410 kubelet[2338]: E1213 01:26:12.126392 2338 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.44:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.44:6443: connect: connection refused Dec 13 01:26:12.128419 kubelet[2338]: I1213 01:26:12.128385 2338 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Dec 13 01:26:12.128901 kubelet[2338]: I1213 01:26:12.128873 2338 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 01:26:12.128996 kubelet[2338]: W1213 01:26:12.128983 2338 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Dec 13 01:26:12.129820 kubelet[2338]: I1213 01:26:12.129713 2338 server.go:1256] "Started kubelet" Dec 13 01:26:12.130226 kubelet[2338]: I1213 01:26:12.130211 2338 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 01:26:12.130576 kubelet[2338]: I1213 01:26:12.130346 2338 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 01:26:12.130576 kubelet[2338]: I1213 01:26:12.130450 2338 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 01:26:12.131359 kubelet[2338]: I1213 01:26:12.131330 2338 server.go:461] "Adding debug handlers to kubelet server" Dec 13 01:26:12.131703 kubelet[2338]: I1213 01:26:12.131677 2338 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 01:26:12.133471 kubelet[2338]: E1213 01:26:12.133237 2338 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 13 01:26:12.133471 kubelet[2338]: I1213 01:26:12.133275 2338 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 01:26:12.133471 kubelet[2338]: I1213 01:26:12.133423 2338 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Dec 13 01:26:12.133471 kubelet[2338]: I1213 01:26:12.133473 2338 reconciler_new.go:29] "Reconciler: start to sync state" Dec 13 01:26:12.133909 kubelet[2338]: W1213 01:26:12.133737 2338 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.44:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.44:6443: connect: connection refused Dec 13 01:26:12.133909 kubelet[2338]: E1213 01:26:12.133782 2338 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.44:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.44:6443: connect: connection refused Dec 13 01:26:12.134098 kubelet[2338]: E1213 01:26:12.134072 2338 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.44:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.44:6443: connect: connection refused" interval="200ms" Dec 13 01:26:12.134523 kubelet[2338]: I1213 01:26:12.134495 2338 factory.go:221] Registration of the systemd container factory successfully Dec 13 01:26:12.134601 kubelet[2338]: I1213 01:26:12.134581 2338 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 01:26:12.135467 kubelet[2338]: E1213 01:26:12.135446 2338 kubelet.go:1462] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 01:26:12.135536 kubelet[2338]: I1213 01:26:12.135522 2338 factory.go:221] Registration of the containerd container factory successfully Dec 13 01:26:12.136129 kubelet[2338]: E1213 01:26:12.136091 2338 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.44:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.44:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1810982c326de59e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-12-13 01:26:12.129686942 +0000 UTC m=+0.608379850,LastTimestamp:2024-12-13 01:26:12.129686942 +0000 UTC m=+0.608379850,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Dec 13 01:26:12.146139 kubelet[2338]: I1213 01:26:12.146104 2338 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 01:26:12.147312 kubelet[2338]: I1213 01:26:12.147291 2338 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 13 01:26:12.147379 kubelet[2338]: I1213 01:26:12.147317 2338 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 01:26:12.147379 kubelet[2338]: I1213 01:26:12.147334 2338 kubelet.go:2329] "Starting kubelet main sync loop" Dec 13 01:26:12.147436 kubelet[2338]: E1213 01:26:12.147385 2338 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 01:26:12.149215 kubelet[2338]: W1213 01:26:12.148784 2338 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.44:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.44:6443: connect: connection refused Dec 13 01:26:12.149215 kubelet[2338]: E1213 01:26:12.148837 2338 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.44:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.44:6443: connect: connection refused Dec 13 01:26:12.155539 kubelet[2338]: I1213 01:26:12.155517 2338 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 01:26:12.155539 kubelet[2338]: I1213 01:26:12.155537 2338 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 01:26:12.155688 kubelet[2338]: I1213 01:26:12.155562 2338 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:26:12.216057 kubelet[2338]: I1213 01:26:12.216030 2338 policy_none.go:49] "None policy: Start" Dec 13 01:26:12.216900 kubelet[2338]: I1213 01:26:12.216861 2338 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 01:26:12.216900 kubelet[2338]: I1213 01:26:12.216906 2338 state_mem.go:35] "Initializing new in-memory state store" Dec 13 01:26:12.221873 kubelet[2338]: I1213 01:26:12.221247 2338 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 01:26:12.223297 kubelet[2338]: I1213 01:26:12.223038 2338 plugin_manager.go:118] "Starting Kubelet 
Plugin Manager" Dec 13 01:26:12.223930 kubelet[2338]: E1213 01:26:12.223915 2338 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Dec 13 01:26:12.234984 kubelet[2338]: I1213 01:26:12.234954 2338 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 01:26:12.235361 kubelet[2338]: E1213 01:26:12.235327 2338 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.44:6443/api/v1/nodes\": dial tcp 10.0.0.44:6443: connect: connection refused" node="localhost" Dec 13 01:26:12.248492 kubelet[2338]: I1213 01:26:12.248451 2338 topology_manager.go:215] "Topology Admit Handler" podUID="06d4416d49fc1d62388b5d7c615ad213" podNamespace="kube-system" podName="kube-apiserver-localhost" Dec 13 01:26:12.249462 kubelet[2338]: I1213 01:26:12.249367 2338 topology_manager.go:215] "Topology Admit Handler" podUID="4f8e0d694c07e04969646aa3c152c34a" podNamespace="kube-system" podName="kube-controller-manager-localhost" Dec 13 01:26:12.250259 kubelet[2338]: I1213 01:26:12.250232 2338 topology_manager.go:215] "Topology Admit Handler" podUID="c4144e8f85b2123a6afada0c1705bbba" podNamespace="kube-system" podName="kube-scheduler-localhost" Dec 13 01:26:12.335085 kubelet[2338]: I1213 01:26:12.335011 2338 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/06d4416d49fc1d62388b5d7c615ad213-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"06d4416d49fc1d62388b5d7c615ad213\") " pod="kube-system/kube-apiserver-localhost" Dec 13 01:26:12.335085 kubelet[2338]: I1213 01:26:12.335047 2338 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/06d4416d49fc1d62388b5d7c615ad213-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"06d4416d49fc1d62388b5d7c615ad213\") " pod="kube-system/kube-apiserver-localhost" Dec 13 01:26:12.335085 kubelet[2338]: I1213 01:26:12.335072 2338 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:26:12.335198 kubelet[2338]: I1213 01:26:12.335103 2338 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:26:12.335198 kubelet[2338]: I1213 01:26:12.335124 2338 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c4144e8f85b2123a6afada0c1705bbba-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"c4144e8f85b2123a6afada0c1705bbba\") " pod="kube-system/kube-scheduler-localhost" Dec 13 01:26:12.335198 kubelet[2338]: I1213 01:26:12.335141 2338 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/06d4416d49fc1d62388b5d7c615ad213-ca-certs\") pod \"kube-apiserver-localhost\" (UID: 
\"06d4416d49fc1d62388b5d7c615ad213\") " pod="kube-system/kube-apiserver-localhost" Dec 13 01:26:12.335198 kubelet[2338]: I1213 01:26:12.335161 2338 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:26:12.335198 kubelet[2338]: I1213 01:26:12.335184 2338 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:26:12.335294 kubelet[2338]: I1213 01:26:12.335204 2338 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:26:12.335294 kubelet[2338]: E1213 01:26:12.335205 2338 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.44:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.44:6443: connect: connection refused" interval="400ms" Dec 13 01:26:12.436966 kubelet[2338]: I1213 01:26:12.436938 2338 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 01:26:12.437532 kubelet[2338]: E1213 01:26:12.437510 2338 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.44:6443/api/v1/nodes\": dial tcp 10.0.0.44:6443: connect: connection refused" node="localhost" Dec 13 01:26:12.554144 kubelet[2338]: E1213 01:26:12.554089 2338 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:26:12.554426 kubelet[2338]: E1213 01:26:12.554399 2338 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:26:12.554839 kubelet[2338]: E1213 01:26:12.554723 2338 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:26:12.554901 containerd[1528]: time="2024-12-13T01:26:12.554731563Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:06d4416d49fc1d62388b5d7c615ad213,Namespace:kube-system,Attempt:0,}" Dec 13 01:26:12.555167 containerd[1528]: time="2024-12-13T01:26:12.555006021Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:c4144e8f85b2123a6afada0c1705bbba,Namespace:kube-system,Attempt:0,}" Dec 13 01:26:12.555247 containerd[1528]: time="2024-12-13T01:26:12.554766025Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:4f8e0d694c07e04969646aa3c152c34a,Namespace:kube-system,Attempt:0,}" Dec 13 01:26:12.736800 kubelet[2338]: E1213 01:26:12.736685 2338 controller.go:145] 
"Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.44:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.44:6443: connect: connection refused" interval="800ms" Dec 13 01:26:12.839145 kubelet[2338]: I1213 01:26:12.839114 2338 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 01:26:12.839470 kubelet[2338]: E1213 01:26:12.839452 2338 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.44:6443/api/v1/nodes\": dial tcp 10.0.0.44:6443: connect: connection refused" node="localhost" Dec 13 01:26:12.973408 kubelet[2338]: W1213 01:26:12.973367 2338 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.44:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.44:6443: connect: connection refused Dec 13 01:26:12.973408 kubelet[2338]: E1213 01:26:12.973415 2338 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.44:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.44:6443: connect: connection refused Dec 13 01:26:13.018872 kubelet[2338]: W1213 01:26:13.018811 2338 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.44:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.44:6443: connect: connection refused Dec 13 01:26:13.018960 kubelet[2338]: E1213 01:26:13.018876 2338 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.44:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.44:6443: connect: connection refused Dec 13 01:26:13.054992 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount244740058.mount: Deactivated successfully. 
Dec 13 01:26:13.059780 containerd[1528]: time="2024-12-13T01:26:13.059723197Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:26:13.060874 containerd[1528]: time="2024-12-13T01:26:13.060837188Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Dec 13 01:26:13.062424 containerd[1528]: time="2024-12-13T01:26:13.062386146Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:26:13.064136 containerd[1528]: time="2024-12-13T01:26:13.064099558Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Dec 13 01:26:13.065095 containerd[1528]: time="2024-12-13T01:26:13.065052258Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:26:13.065877 containerd[1528]: time="2024-12-13T01:26:13.065844627Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:26:13.066774 containerd[1528]: time="2024-12-13T01:26:13.066736293Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Dec 13 01:26:13.068083 containerd[1528]: time="2024-12-13T01:26:13.068038191Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:26:13.068710 containerd[1528]: time="2024-12-13T01:26:13.068678674Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 513.567986ms" Dec 13 01:26:13.076241 containerd[1528]: time="2024-12-13T01:26:13.073286606Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 518.462264ms" Dec 13 01:26:13.077146 containerd[1528]: time="2024-12-13T01:26:13.076955286Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 521.72208ms" Dec 13 01:26:13.186084 containerd[1528]: time="2024-12-13T01:26:13.185972693Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:26:13.186084 containerd[1528]: time="2024-12-13T01:26:13.186033928Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:26:13.186084 containerd[1528]: time="2024-12-13T01:26:13.186049777Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:26:13.186266 containerd[1528]: time="2024-12-13T01:26:13.186136226Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:26:13.187451 containerd[1528]: time="2024-12-13T01:26:13.187310412Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:26:13.187690 containerd[1528]: time="2024-12-13T01:26:13.187373047Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:26:13.187690 containerd[1528]: time="2024-12-13T01:26:13.187620467Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:26:13.188979 containerd[1528]: time="2024-12-13T01:26:13.188738701Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:26:13.188979 containerd[1528]: time="2024-12-13T01:26:13.188805379Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:26:13.188979 containerd[1528]: time="2024-12-13T01:26:13.188817346Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:26:13.188979 containerd[1528]: time="2024-12-13T01:26:13.188907997Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:26:13.190032 containerd[1528]: time="2024-12-13T01:26:13.189972881Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:26:13.239803 containerd[1528]: time="2024-12-13T01:26:13.239761348Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:06d4416d49fc1d62388b5d7c615ad213,Namespace:kube-system,Attempt:0,} returns sandbox id \"1defe8e108c2a8d4b4fd66d503cec17921f006fe97ebd4214a2525e2d4ae799f\"" Dec 13 01:26:13.241632 kubelet[2338]: E1213 01:26:13.241520 2338 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:26:13.245077 containerd[1528]: time="2024-12-13T01:26:13.244994996Z" level=info msg="CreateContainer within sandbox \"1defe8e108c2a8d4b4fd66d503cec17921f006fe97ebd4214a2525e2d4ae799f\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 13 01:26:13.246195 containerd[1528]: time="2024-12-13T01:26:13.246144327Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:c4144e8f85b2123a6afada0c1705bbba,Namespace:kube-system,Attempt:0,} returns sandbox id \"919fc1c6580e1f8ece160af8b1260a50925fbbf3ac3f391f0b4944b7118322c0\"" Dec 13 01:26:13.246454 containerd[1528]: time="2024-12-13T01:26:13.246263795Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:4f8e0d694c07e04969646aa3c152c34a,Namespace:kube-system,Attempt:0,} returns sandbox id \"ba84c0dc090a72d47808e65190933ec14eddb78c599dd9715b3f6644cd301e18\"" Dec 13 01:26:13.246879 kubelet[2338]: E1213 01:26:13.246848 2338 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:26:13.247049 kubelet[2338]: E1213 01:26:13.246848 2338 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:26:13.248558 containerd[1528]: time="2024-12-13T01:26:13.248526758Z" level=info msg="CreateContainer within sandbox \"919fc1c6580e1f8ece160af8b1260a50925fbbf3ac3f391f0b4944b7118322c0\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 13 01:26:13.249086 containerd[1528]: time="2024-12-13T01:26:13.249064343Z" level=info msg="CreateContainer within sandbox \"ba84c0dc090a72d47808e65190933ec14eddb78c599dd9715b3f6644cd301e18\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 13 01:26:13.264439 containerd[1528]: time="2024-12-13T01:26:13.264391673Z" level=info msg="CreateContainer within sandbox \"ba84c0dc090a72d47808e65190933ec14eddb78c599dd9715b3f6644cd301e18\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"fe4aa0289c649ec8f94184e298cd074164a6ce4e940c02db866a835a389cc737\"" Dec 13 01:26:13.265292 containerd[1528]: time="2024-12-13T01:26:13.265158988Z" level=info msg="CreateContainer within sandbox \"1defe8e108c2a8d4b4fd66d503cec17921f006fe97ebd4214a2525e2d4ae799f\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"81de21c67ba38c4b7236de300aca7cb05a2f8c82f49b4e060200e23cd12d780f\"" Dec 13 01:26:13.265842 containerd[1528]: time="2024-12-13T01:26:13.265588071Z" level=info msg="StartContainer for \"81de21c67ba38c4b7236de300aca7cb05a2f8c82f49b4e060200e23cd12d780f\"" Dec 13 01:26:13.265842 containerd[1528]: time="2024-12-13T01:26:13.265664034Z" level=info msg="StartContainer for 
\"fe4aa0289c649ec8f94184e298cd074164a6ce4e940c02db866a835a389cc737\"" Dec 13 01:26:13.269896 containerd[1528]: time="2024-12-13T01:26:13.269792294Z" level=info msg="CreateContainer within sandbox \"919fc1c6580e1f8ece160af8b1260a50925fbbf3ac3f391f0b4944b7118322c0\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"8842f8afc89154f8a7eb4343aed943da78c5acf7d1b9b0165c12e1a09759bcba\"" Dec 13 01:26:13.271183 containerd[1528]: time="2024-12-13T01:26:13.271152706Z" level=info msg="StartContainer for \"8842f8afc89154f8a7eb4343aed943da78c5acf7d1b9b0165c12e1a09759bcba\"" Dec 13 01:26:13.301086 kubelet[2338]: W1213 01:26:13.301030 2338 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.44:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.44:6443: connect: connection refused Dec 13 01:26:13.301086 kubelet[2338]: E1213 01:26:13.301087 2338 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.44:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.44:6443: connect: connection refused Dec 13 01:26:13.339177 containerd[1528]: time="2024-12-13T01:26:13.339052881Z" level=info msg="StartContainer for \"fe4aa0289c649ec8f94184e298cd074164a6ce4e940c02db866a835a389cc737\" returns successfully" Dec 13 01:26:13.339506 containerd[1528]: time="2024-12-13T01:26:13.339409404Z" level=info msg="StartContainer for \"8842f8afc89154f8a7eb4343aed943da78c5acf7d1b9b0165c12e1a09759bcba\" returns successfully" Dec 13 01:26:13.380098 containerd[1528]: time="2024-12-13T01:26:13.376177649Z" level=info msg="StartContainer for \"81de21c67ba38c4b7236de300aca7cb05a2f8c82f49b4e060200e23cd12d780f\" returns successfully" Dec 13 01:26:13.642913 kubelet[2338]: I1213 01:26:13.642773 2338 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 01:26:14.156135 kubelet[2338]: E1213 01:26:14.156075 2338 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:26:14.158005 kubelet[2338]: E1213 01:26:14.157977 2338 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:26:14.160295 kubelet[2338]: E1213 01:26:14.160264 2338 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:26:14.934212 kubelet[2338]: E1213 01:26:14.934177 2338 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Dec 13 01:26:15.020020 kubelet[2338]: I1213 01:26:15.019977 2338 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Dec 13 01:26:15.033259 kubelet[2338]: E1213 01:26:15.033041 2338 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 13 01:26:15.134143 kubelet[2338]: E1213 01:26:15.134100 2338 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 13 01:26:15.163523 kubelet[2338]: E1213 01:26:15.163422 2338 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:26:15.234580 kubelet[2338]: E1213 01:26:15.234473 2338 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 13 01:26:16.125732 kubelet[2338]: I1213 01:26:16.125463 2338 apiserver.go:52] "Watching apiserver" Dec 13 01:26:16.134246 kubelet[2338]: I1213 01:26:16.133842 2338 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Dec 13 01:26:16.178739 kubelet[2338]: E1213 01:26:16.177889 2338 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:26:17.164642 kubelet[2338]: E1213 01:26:17.164547 2338 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:26:17.629379 systemd[1]: Reloading requested from client PID 2624 ('systemctl') (unit session-7.scope)... Dec 13 01:26:17.629394 systemd[1]: Reloading... Dec 13 01:26:17.682642 zram_generator::config[2666]: No configuration found. Dec 13 01:26:17.767182 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:26:17.822161 systemd[1]: Reloading finished in 192 ms. Dec 13 01:26:17.851990 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:26:17.864543 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 01:26:17.864867 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:26:17.873921 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:26:17.956175 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:26:17.960901 (kubelet)[2715]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 13 01:26:18.000405 kubelet[2715]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 01:26:18.000405 kubelet[2715]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 01:26:18.000405 kubelet[2715]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Dec 13 01:26:18.000868 kubelet[2715]: I1213 01:26:18.000447 2715 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 01:26:18.005899 kubelet[2715]: I1213 01:26:18.005872 2715 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Dec 13 01:26:18.005992 kubelet[2715]: I1213 01:26:18.005940 2715 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 01:26:18.006117 kubelet[2715]: I1213 01:26:18.006103 2715 server.go:919] "Client rotation is on, will bootstrap in background" Dec 13 01:26:18.008193 kubelet[2715]: I1213 01:26:18.008163 2715 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Dec 13 01:26:18.011343 kubelet[2715]: I1213 01:26:18.011252 2715 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 01:26:18.018165 kubelet[2715]: I1213 01:26:18.018145 2715 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Dec 13 01:26:18.018536 kubelet[2715]: I1213 01:26:18.018524 2715 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 01:26:18.018710 kubelet[2715]: I1213 01:26:18.018692 2715 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 01:26:18.018778 kubelet[2715]: I1213 01:26:18.018719 2715 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 01:26:18.018778 kubelet[2715]: I1213 01:26:18.018728 2715 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 01:26:18.018778 kubelet[2715]: I1213 01:26:18.018755 2715 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:26:18.018849 kubelet[2715]: I1213 01:26:18.018837 2715 kubelet.go:396] "Attempting to sync node with API server" Dec 13 01:26:18.018872 kubelet[2715]: I1213 01:26:18.018850 2715 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 01:26:18.018872 kubelet[2715]: I1213 01:26:18.018869 2715 kubelet.go:312] "Adding apiserver pod source" Dec 13 
01:26:18.018913 kubelet[2715]: I1213 01:26:18.018883 2715 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 01:26:18.020676 kubelet[2715]: I1213 01:26:18.019740 2715 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Dec 13 01:26:18.020676 kubelet[2715]: I1213 01:26:18.019907 2715 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 01:26:18.020676 kubelet[2715]: I1213 01:26:18.020253 2715 server.go:1256] "Started kubelet" Dec 13 01:26:18.020676 kubelet[2715]: I1213 01:26:18.020624 2715 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 01:26:18.021028 kubelet[2715]: I1213 01:26:18.021007 2715 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 01:26:18.021325 kubelet[2715]: I1213 01:26:18.021307 2715 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 01:26:18.021422 kubelet[2715]: I1213 01:26:18.021343 2715 server.go:461] "Adding debug handlers to kubelet server" Dec 13 01:26:18.023150 kubelet[2715]: E1213 01:26:18.023130 2715 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 01:26:18.023673 kubelet[2715]: I1213 01:26:18.023646 2715 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 01:26:18.023734 kubelet[2715]: I1213 01:26:18.023717 2715 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 01:26:18.024243 kubelet[2715]: E1213 01:26:18.024227 2715 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 13 01:26:18.024583 kubelet[2715]: I1213 01:26:18.024567 2715 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Dec 13 01:26:18.024802 kubelet[2715]: I1213 01:26:18.024775 2715 reconciler_new.go:29] "Reconciler: start to sync state" Dec 13 01:26:18.025637 kubelet[2715]: I1213 01:26:18.025581 2715 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 01:26:18.027850 kubelet[2715]: I1213 01:26:18.027828 2715 factory.go:221] Registration of the containerd container factory successfully Dec 13 01:26:18.027850 kubelet[2715]: I1213 01:26:18.027846 2715 factory.go:221] Registration of the systemd container factory successfully Dec 13 01:26:18.050550 kubelet[2715]: I1213 01:26:18.050522 2715 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 01:26:18.054633 kubelet[2715]: I1213 01:26:18.053127 2715 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Dec 13 01:26:18.054633 kubelet[2715]: I1213 01:26:18.053151 2715 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 01:26:18.054633 kubelet[2715]: I1213 01:26:18.053170 2715 kubelet.go:2329] "Starting kubelet main sync loop" Dec 13 01:26:18.054633 kubelet[2715]: E1213 01:26:18.053211 2715 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 01:26:18.095870 kubelet[2715]: I1213 01:26:18.095845 2715 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 01:26:18.096025 kubelet[2715]: I1213 01:26:18.096013 2715 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 01:26:18.096093 kubelet[2715]: I1213 01:26:18.096085 2715 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:26:18.096352 kubelet[2715]: I1213 01:26:18.096340 2715 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 13 01:26:18.096452 kubelet[2715]: I1213 01:26:18.096441 2715 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 13 01:26:18.096530 kubelet[2715]: I1213 01:26:18.096520 2715 policy_none.go:49] "None policy: Start" Dec 13 01:26:18.097259 kubelet[2715]: I1213 01:26:18.097213 2715 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 01:26:18.097328 kubelet[2715]: I1213 01:26:18.097269 2715 state_mem.go:35] "Initializing new in-memory state store" Dec 13 01:26:18.097461 kubelet[2715]: I1213 01:26:18.097442 2715 state_mem.go:75] "Updated machine memory state" Dec 13 01:26:18.098590 kubelet[2715]: I1213 01:26:18.098567 2715 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 01:26:18.098807 kubelet[2715]: I1213 01:26:18.098788 2715 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 01:26:18.128570 kubelet[2715]: I1213 01:26:18.128539 2715 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 01:26:18.142603 kubelet[2715]: I1213 01:26:18.142570 2715 kubelet_node_status.go:112] "Node was previously registered" node="localhost" Dec 13 01:26:18.142725 kubelet[2715]: I1213 01:26:18.142671 2715 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Dec 13 01:26:18.154228 kubelet[2715]: I1213 01:26:18.154197 2715 topology_manager.go:215] "Topology Admit Handler" podUID="06d4416d49fc1d62388b5d7c615ad213" podNamespace="kube-system" podName="kube-apiserver-localhost" Dec 13 01:26:18.154333 kubelet[2715]: I1213 01:26:18.154276 2715 topology_manager.go:215] "Topology Admit Handler" podUID="4f8e0d694c07e04969646aa3c152c34a" podNamespace="kube-system" podName="kube-controller-manager-localhost" Dec 13 01:26:18.154362 kubelet[2715]: I1213 01:26:18.154339 2715 topology_manager.go:215] "Topology Admit Handler" podUID="c4144e8f85b2123a6afada0c1705bbba" podNamespace="kube-system" podName="kube-scheduler-localhost" Dec 13 01:26:18.159866 kubelet[2715]: E1213 01:26:18.159836 2715 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Dec 13 01:26:18.225527 kubelet[2715]: I1213 01:26:18.225423 2715 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/06d4416d49fc1d62388b5d7c615ad213-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"06d4416d49fc1d62388b5d7c615ad213\") " 
pod="kube-system/kube-apiserver-localhost" Dec 13 01:26:18.225527 kubelet[2715]: I1213 01:26:18.225476 2715 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/06d4416d49fc1d62388b5d7c615ad213-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"06d4416d49fc1d62388b5d7c615ad213\") " pod="kube-system/kube-apiserver-localhost" Dec 13 01:26:18.225702 kubelet[2715]: I1213 01:26:18.225543 2715 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:26:18.225702 kubelet[2715]: I1213 01:26:18.225630 2715 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:26:18.225702 kubelet[2715]: I1213 01:26:18.225659 2715 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:26:18.225779 kubelet[2715]: I1213 01:26:18.225715 2715 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c4144e8f85b2123a6afada0c1705bbba-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"c4144e8f85b2123a6afada0c1705bbba\") " pod="kube-system/kube-scheduler-localhost" Dec 13 01:26:18.225779 kubelet[2715]: I1213 01:26:18.225762 2715 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/06d4416d49fc1d62388b5d7c615ad213-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"06d4416d49fc1d62388b5d7c615ad213\") " pod="kube-system/kube-apiserver-localhost" Dec 13 01:26:18.225819 kubelet[2715]: I1213 01:26:18.225787 2715 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:26:18.225819 kubelet[2715]: I1213 01:26:18.225806 2715 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:26:18.460678 kubelet[2715]: E1213 01:26:18.460570 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:26:18.460678 kubelet[2715]: E1213 01:26:18.460573 
2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:26:18.460678 kubelet[2715]: E1213 01:26:18.460669 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:26:19.019968 kubelet[2715]: I1213 01:26:19.019918 2715 apiserver.go:52] "Watching apiserver" Dec 13 01:26:19.025297 kubelet[2715]: I1213 01:26:19.025258 2715 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Dec 13 01:26:19.073626 kubelet[2715]: E1213 01:26:19.073138 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:26:19.073626 kubelet[2715]: E1213 01:26:19.073302 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:26:19.074066 kubelet[2715]: E1213 01:26:19.074046 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:26:19.101665 kubelet[2715]: I1213 01:26:19.101239 2715 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.1010838569999999 podStartE2EDuration="1.101083857s" podCreationTimestamp="2024-12-13 01:26:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:26:19.101085137 +0000 UTC m=+1.136900166" watchObservedRunningTime="2024-12-13 01:26:19.101083857 +0000 UTC m=+1.136898886" Dec 13 01:26:19.101665 kubelet[2715]: I1213 01:26:19.101351 2715 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.101333987 podStartE2EDuration="1.101333987s" podCreationTimestamp="2024-12-13 01:26:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:26:19.092549747 +0000 UTC m=+1.128364776" watchObservedRunningTime="2024-12-13 01:26:19.101333987 +0000 UTC m=+1.137149016" Dec 13 01:26:19.132835 kubelet[2715]: I1213 01:26:19.132791 2715 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.132743527 podStartE2EDuration="3.132743527s" podCreationTimestamp="2024-12-13 01:26:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:26:19.110484839 +0000 UTC m=+1.146299868" watchObservedRunningTime="2024-12-13 01:26:19.132743527 +0000 UTC m=+1.168558556" Dec 13 01:26:20.074906 kubelet[2715]: E1213 01:26:20.074830 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:26:20.259928 kubelet[2715]: E1213 01:26:20.259859 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" 
Dec 13 01:26:22.129866 sudo[1740]: pam_unix(sudo:session): session closed for user root Dec 13 01:26:22.132359 sshd[1733]: pam_unix(sshd:session): session closed for user core Dec 13 01:26:22.135801 systemd[1]: sshd@6-10.0.0.44:22-10.0.0.1:43860.service: Deactivated successfully. Dec 13 01:26:22.137672 systemd[1]: session-7.scope: Deactivated successfully. Dec 13 01:26:22.138674 systemd-logind[1510]: Session 7 logged out. Waiting for processes to exit. Dec 13 01:26:22.139431 systemd-logind[1510]: Removed session 7. Dec 13 01:26:23.669951 kubelet[2715]: E1213 01:26:23.669851 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:26:24.081853 kubelet[2715]: E1213 01:26:24.081819 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:26:25.082676 kubelet[2715]: E1213 01:26:25.082501 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:26:25.190772 kubelet[2715]: E1213 01:26:25.190724 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:26:26.084834 kubelet[2715]: E1213 01:26:26.084806 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:26:29.132259 update_engine[1518]: I20241213 01:26:29.132189 1518 update_attempter.cc:509] Updating boot flags... Dec 13 01:26:29.166537 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 46 scanned by (udev-worker) (2809) Dec 13 01:26:29.200632 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 46 scanned by (udev-worker) (2813) Dec 13 01:26:29.228753 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 46 scanned by (udev-worker) (2813) Dec 13 01:26:30.266848 kubelet[2715]: E1213 01:26:30.266764 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:26:33.798340 kubelet[2715]: I1213 01:26:33.798310 2715 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 13 01:26:33.808467 containerd[1528]: time="2024-12-13T01:26:33.808406010Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Dec 13 01:26:33.808836 kubelet[2715]: I1213 01:26:33.808798 2715 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 13 01:26:34.247918 kubelet[2715]: I1213 01:26:34.247649 2715 topology_manager.go:215] "Topology Admit Handler" podUID="268fd733-c48a-4b4c-b4cf-cac3fe15da43" podNamespace="kube-system" podName="kube-proxy-8rjh6" Dec 13 01:26:34.343583 kubelet[2715]: I1213 01:26:34.343535 2715 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cc2hz\" (UniqueName: \"kubernetes.io/projected/268fd733-c48a-4b4c-b4cf-cac3fe15da43-kube-api-access-cc2hz\") pod \"kube-proxy-8rjh6\" (UID: \"268fd733-c48a-4b4c-b4cf-cac3fe15da43\") " pod="kube-system/kube-proxy-8rjh6" Dec 13 01:26:34.343583 kubelet[2715]: I1213 01:26:34.343591 2715 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/268fd733-c48a-4b4c-b4cf-cac3fe15da43-xtables-lock\") pod \"kube-proxy-8rjh6\" (UID: \"268fd733-c48a-4b4c-b4cf-cac3fe15da43\") " pod="kube-system/kube-proxy-8rjh6" Dec 13 01:26:34.343744 kubelet[2715]: I1213 01:26:34.343638 2715 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/268fd733-c48a-4b4c-b4cf-cac3fe15da43-lib-modules\") pod \"kube-proxy-8rjh6\" (UID: \"268fd733-c48a-4b4c-b4cf-cac3fe15da43\") " pod="kube-system/kube-proxy-8rjh6" Dec 13 01:26:34.343744 kubelet[2715]: I1213 01:26:34.343660 2715 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/268fd733-c48a-4b4c-b4cf-cac3fe15da43-kube-proxy\") pod \"kube-proxy-8rjh6\" (UID: \"268fd733-c48a-4b4c-b4cf-cac3fe15da43\") " pod="kube-system/kube-proxy-8rjh6" Dec 13 01:26:34.360320 kubelet[2715]: I1213 01:26:34.360275 2715 topology_manager.go:215] "Topology Admit Handler" podUID="055a3b1e-73bf-47cb-bc8b-c9182e7c9622" podNamespace="tigera-operator" podName="tigera-operator-c7ccbd65-tgkcl" Dec 13 01:26:34.444312 kubelet[2715]: I1213 01:26:34.444252 2715 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z2lhh\" (UniqueName: \"kubernetes.io/projected/055a3b1e-73bf-47cb-bc8b-c9182e7c9622-kube-api-access-z2lhh\") pod \"tigera-operator-c7ccbd65-tgkcl\" (UID: \"055a3b1e-73bf-47cb-bc8b-c9182e7c9622\") " pod="tigera-operator/tigera-operator-c7ccbd65-tgkcl" Dec 13 01:26:34.444458 kubelet[2715]: I1213 01:26:34.444355 2715 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/055a3b1e-73bf-47cb-bc8b-c9182e7c9622-var-lib-calico\") pod \"tigera-operator-c7ccbd65-tgkcl\" (UID: \"055a3b1e-73bf-47cb-bc8b-c9182e7c9622\") " pod="tigera-operator/tigera-operator-c7ccbd65-tgkcl" Dec 13 01:26:34.551205 kubelet[2715]: E1213 01:26:34.551166 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:26:34.552113 containerd[1528]: time="2024-12-13T01:26:34.551782023Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8rjh6,Uid:268fd733-c48a-4b4c-b4cf-cac3fe15da43,Namespace:kube-system,Attempt:0,}" Dec 13 01:26:34.577983 containerd[1528]: time="2024-12-13T01:26:34.577898640Z" level=info msg="loading plugin 
\"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:26:34.577983 containerd[1528]: time="2024-12-13T01:26:34.577947648Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:26:34.577983 containerd[1528]: time="2024-12-13T01:26:34.577959410Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:26:34.578121 containerd[1528]: time="2024-12-13T01:26:34.578032182Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:26:34.621628 containerd[1528]: time="2024-12-13T01:26:34.621565092Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8rjh6,Uid:268fd733-c48a-4b4c-b4cf-cac3fe15da43,Namespace:kube-system,Attempt:0,} returns sandbox id \"322109b00cc07aa8d04a28b745cd1c586268d0d8904ebd89f61e42502880c76e\"" Dec 13 01:26:34.625325 kubelet[2715]: E1213 01:26:34.625305 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:26:34.631432 containerd[1528]: time="2024-12-13T01:26:34.631392284Z" level=info msg="CreateContainer within sandbox \"322109b00cc07aa8d04a28b745cd1c586268d0d8904ebd89f61e42502880c76e\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 13 01:26:34.646348 containerd[1528]: time="2024-12-13T01:26:34.646287797Z" level=info msg="CreateContainer within sandbox \"322109b00cc07aa8d04a28b745cd1c586268d0d8904ebd89f61e42502880c76e\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"26b4a97c7274bb87584b7a79f87c4a94458c427a9da7adb510d12e7f1e78fe3e\"" Dec 13 01:26:34.648317 containerd[1528]: time="2024-12-13T01:26:34.646920782Z" level=info msg="StartContainer for \"26b4a97c7274bb87584b7a79f87c4a94458c427a9da7adb510d12e7f1e78fe3e\"" Dec 13 01:26:34.669748 containerd[1528]: time="2024-12-13T01:26:34.669598788Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-c7ccbd65-tgkcl,Uid:055a3b1e-73bf-47cb-bc8b-c9182e7c9622,Namespace:tigera-operator,Attempt:0,}" Dec 13 01:26:34.697906 containerd[1528]: time="2024-12-13T01:26:34.697720178Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:26:34.697906 containerd[1528]: time="2024-12-13T01:26:34.697777708Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:26:34.697906 containerd[1528]: time="2024-12-13T01:26:34.697793551Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:26:34.698037 containerd[1528]: time="2024-12-13T01:26:34.697882725Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:26:34.705038 containerd[1528]: time="2024-12-13T01:26:34.704997187Z" level=info msg="StartContainer for \"26b4a97c7274bb87584b7a79f87c4a94458c427a9da7adb510d12e7f1e78fe3e\" returns successfully" Dec 13 01:26:34.740340 containerd[1528]: time="2024-12-13T01:26:34.740282807Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-c7ccbd65-tgkcl,Uid:055a3b1e-73bf-47cb-bc8b-c9182e7c9622,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"5ac42f7b5cfc5e913b56a11deed6b643aac7aae5812bd0afbb5c1715b5a43b26\"" Dec 13 01:26:34.745482 containerd[1528]: time="2024-12-13T01:26:34.745454225Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\"" Dec 13 01:26:35.098879 kubelet[2715]: E1213 01:26:35.098845 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:26:35.109942 kubelet[2715]: I1213 01:26:35.109897 2715 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-8rjh6" podStartSLOduration=1.109861011 podStartE2EDuration="1.109861011s" podCreationTimestamp="2024-12-13 01:26:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:26:35.109755114 +0000 UTC m=+17.145570143" watchObservedRunningTime="2024-12-13 01:26:35.109861011 +0000 UTC m=+17.145676040" Dec 13 01:26:35.462651 systemd[1]: run-containerd-runc-k8s.io-322109b00cc07aa8d04a28b745cd1c586268d0d8904ebd89f61e42502880c76e-runc.hIhCUm.mount: Deactivated successfully. Dec 13 01:26:36.637951 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount68572812.mount: Deactivated successfully. 
Dec 13 01:26:36.861008 containerd[1528]: time="2024-12-13T01:26:36.860921121Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:26:36.861592 containerd[1528]: time="2024-12-13T01:26:36.861558138Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=19125968" Dec 13 01:26:36.862134 containerd[1528]: time="2024-12-13T01:26:36.862075576Z" level=info msg="ImageCreate event name:\"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:26:36.864172 containerd[1528]: time="2024-12-13T01:26:36.864137969Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:26:36.865768 containerd[1528]: time="2024-12-13T01:26:36.865729290Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"19120155\" in 2.120240299s" Dec 13 01:26:36.865768 containerd[1528]: time="2024-12-13T01:26:36.865766896Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\"" Dec 13 01:26:36.874842 containerd[1528]: time="2024-12-13T01:26:36.874804947Z" level=info msg="CreateContainer within sandbox \"5ac42f7b5cfc5e913b56a11deed6b643aac7aae5812bd0afbb5c1715b5a43b26\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Dec 13 01:26:36.883682 containerd[1528]: time="2024-12-13T01:26:36.883623525Z" level=info msg="CreateContainer within sandbox \"5ac42f7b5cfc5e913b56a11deed6b643aac7aae5812bd0afbb5c1715b5a43b26\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"373aba8f1ec22be95c9a20f06f7bfffa5d9ec5a554c47f192dd2f021e3a96775\"" Dec 13 01:26:36.884381 containerd[1528]: time="2024-12-13T01:26:36.884168448Z" level=info msg="StartContainer for \"373aba8f1ec22be95c9a20f06f7bfffa5d9ec5a554c47f192dd2f021e3a96775\"" Dec 13 01:26:36.924141 containerd[1528]: time="2024-12-13T01:26:36.923820743Z" level=info msg="StartContainer for \"373aba8f1ec22be95c9a20f06f7bfffa5d9ec5a554c47f192dd2f021e3a96775\" returns successfully" Dec 13 01:26:41.692855 kubelet[2715]: I1213 01:26:41.692409 2715 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-c7ccbd65-tgkcl" podStartSLOduration=5.565353203 podStartE2EDuration="7.6916394s" podCreationTimestamp="2024-12-13 01:26:34 +0000 UTC" firstStartedPulling="2024-12-13 01:26:34.741260769 +0000 UTC m=+16.777075798" lastFinishedPulling="2024-12-13 01:26:36.867546966 +0000 UTC m=+18.903361995" observedRunningTime="2024-12-13 01:26:37.169400379 +0000 UTC m=+19.205215408" watchObservedRunningTime="2024-12-13 01:26:41.6916394 +0000 UTC m=+23.727454429" Dec 13 01:26:41.694163 kubelet[2715]: I1213 01:26:41.693733 2715 topology_manager.go:215] "Topology Admit Handler" podUID="32165b1d-366e-44c2-82dd-fc8f4b126e2d" podNamespace="calico-system" podName="calico-typha-7fbd5c88f4-fjfks" Dec 13 01:26:41.697736 kubelet[2715]: I1213 01:26:41.695818 2715 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xzc2z\" (UniqueName: \"kubernetes.io/projected/32165b1d-366e-44c2-82dd-fc8f4b126e2d-kube-api-access-xzc2z\") pod \"calico-typha-7fbd5c88f4-fjfks\" (UID: \"32165b1d-366e-44c2-82dd-fc8f4b126e2d\") " pod="calico-system/calico-typha-7fbd5c88f4-fjfks" Dec 13 01:26:41.697736 kubelet[2715]: I1213 01:26:41.695874 2715 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/32165b1d-366e-44c2-82dd-fc8f4b126e2d-typha-certs\") pod \"calico-typha-7fbd5c88f4-fjfks\" (UID: \"32165b1d-366e-44c2-82dd-fc8f4b126e2d\") " pod="calico-system/calico-typha-7fbd5c88f4-fjfks" Dec 13 01:26:41.697736 kubelet[2715]: I1213 01:26:41.695899 2715 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/32165b1d-366e-44c2-82dd-fc8f4b126e2d-tigera-ca-bundle\") pod \"calico-typha-7fbd5c88f4-fjfks\" (UID: \"32165b1d-366e-44c2-82dd-fc8f4b126e2d\") " pod="calico-system/calico-typha-7fbd5c88f4-fjfks" Dec 13 01:26:41.884733 kubelet[2715]: I1213 01:26:41.883159 2715 topology_manager.go:215] "Topology Admit Handler" podUID="16a5324f-0b1a-4895-a52d-e1247851006f" podNamespace="calico-system" podName="calico-node-46588" Dec 13 01:26:41.997379 kubelet[2715]: I1213 01:26:41.997210 2715 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/16a5324f-0b1a-4895-a52d-e1247851006f-cni-log-dir\") pod \"calico-node-46588\" (UID: \"16a5324f-0b1a-4895-a52d-e1247851006f\") " pod="calico-system/calico-node-46588" Dec 13 01:26:41.997379 kubelet[2715]: I1213 01:26:41.997272 2715 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/16a5324f-0b1a-4895-a52d-e1247851006f-flexvol-driver-host\") pod \"calico-node-46588\" (UID: \"16a5324f-0b1a-4895-a52d-e1247851006f\") " pod="calico-system/calico-node-46588" Dec 13 01:26:41.997379 kubelet[2715]: I1213 01:26:41.997296 2715 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5fnw\" (UniqueName: \"kubernetes.io/projected/16a5324f-0b1a-4895-a52d-e1247851006f-kube-api-access-z5fnw\") pod \"calico-node-46588\" (UID: \"16a5324f-0b1a-4895-a52d-e1247851006f\") " pod="calico-system/calico-node-46588" Dec 13 01:26:41.997379 kubelet[2715]: I1213 01:26:41.997317 2715 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/16a5324f-0b1a-4895-a52d-e1247851006f-xtables-lock\") pod \"calico-node-46588\" (UID: \"16a5324f-0b1a-4895-a52d-e1247851006f\") " pod="calico-system/calico-node-46588" Dec 13 01:26:41.997379 kubelet[2715]: I1213 01:26:41.997352 2715 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/16a5324f-0b1a-4895-a52d-e1247851006f-policysync\") pod \"calico-node-46588\" (UID: \"16a5324f-0b1a-4895-a52d-e1247851006f\") " pod="calico-system/calico-node-46588" Dec 13 01:26:41.998059 kubelet[2715]: I1213 01:26:41.997373 2715 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/16a5324f-0b1a-4895-a52d-e1247851006f-lib-modules\") pod \"calico-node-46588\" (UID: \"16a5324f-0b1a-4895-a52d-e1247851006f\") " pod="calico-system/calico-node-46588" Dec 13 01:26:41.998059 kubelet[2715]: I1213 01:26:41.997393 2715 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/16a5324f-0b1a-4895-a52d-e1247851006f-node-certs\") pod \"calico-node-46588\" (UID: \"16a5324f-0b1a-4895-a52d-e1247851006f\") " pod="calico-system/calico-node-46588" Dec 13 01:26:41.998059 kubelet[2715]: I1213 01:26:41.997413 2715 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/16a5324f-0b1a-4895-a52d-e1247851006f-var-run-calico\") pod \"calico-node-46588\" (UID: \"16a5324f-0b1a-4895-a52d-e1247851006f\") " pod="calico-system/calico-node-46588" Dec 13 01:26:41.998059 kubelet[2715]: I1213 01:26:41.997432 2715 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/16a5324f-0b1a-4895-a52d-e1247851006f-cni-bin-dir\") pod \"calico-node-46588\" (UID: \"16a5324f-0b1a-4895-a52d-e1247851006f\") " pod="calico-system/calico-node-46588" Dec 13 01:26:41.998059 kubelet[2715]: I1213 01:26:41.997459 2715 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/16a5324f-0b1a-4895-a52d-e1247851006f-tigera-ca-bundle\") pod \"calico-node-46588\" (UID: \"16a5324f-0b1a-4895-a52d-e1247851006f\") " pod="calico-system/calico-node-46588" Dec 13 01:26:41.998169 kubelet[2715]: I1213 01:26:41.997482 2715 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/16a5324f-0b1a-4895-a52d-e1247851006f-var-lib-calico\") pod \"calico-node-46588\" (UID: \"16a5324f-0b1a-4895-a52d-e1247851006f\") " pod="calico-system/calico-node-46588" Dec 13 01:26:41.998169 kubelet[2715]: I1213 01:26:41.997500 2715 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/16a5324f-0b1a-4895-a52d-e1247851006f-cni-net-dir\") pod \"calico-node-46588\" (UID: \"16a5324f-0b1a-4895-a52d-e1247851006f\") " pod="calico-system/calico-node-46588" Dec 13 01:26:42.002580 kubelet[2715]: E1213 01:26:42.002550 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:26:42.003180 containerd[1528]: time="2024-12-13T01:26:42.003142643Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7fbd5c88f4-fjfks,Uid:32165b1d-366e-44c2-82dd-fc8f4b126e2d,Namespace:calico-system,Attempt:0,}" Dec 13 01:26:42.027906 containerd[1528]: time="2024-12-13T01:26:42.027694666Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:26:42.027906 containerd[1528]: time="2024-12-13T01:26:42.027768554Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:26:42.027906 containerd[1528]: time="2024-12-13T01:26:42.027783956Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:26:42.027906 containerd[1528]: time="2024-12-13T01:26:42.027886008Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:26:42.067705 containerd[1528]: time="2024-12-13T01:26:42.067667311Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7fbd5c88f4-fjfks,Uid:32165b1d-366e-44c2-82dd-fc8f4b126e2d,Namespace:calico-system,Attempt:0,} returns sandbox id \"ee363de6fc11a298bf9d5e91311283983c6f7cf349b0d8819ca5950775123ca2\"" Dec 13 01:26:42.068253 kubelet[2715]: E1213 01:26:42.068229 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:26:42.069909 containerd[1528]: time="2024-12-13T01:26:42.069868611Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\"" Dec 13 01:26:42.087546 kubelet[2715]: I1213 01:26:42.086645 2715 topology_manager.go:215] "Topology Admit Handler" podUID="f1e8aa9b-f9a7-4786-81cc-8faa5931a2c7" podNamespace="calico-system" podName="csi-node-driver-mmdpd" Dec 13 01:26:42.087546 kubelet[2715]: E1213 01:26:42.086899 2715 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mmdpd" podUID="f1e8aa9b-f9a7-4786-81cc-8faa5931a2c7" Dec 13 01:26:42.097904 kubelet[2715]: I1213 01:26:42.097873 2715 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f1e8aa9b-f9a7-4786-81cc-8faa5931a2c7-kubelet-dir\") pod \"csi-node-driver-mmdpd\" (UID: \"f1e8aa9b-f9a7-4786-81cc-8faa5931a2c7\") " pod="calico-system/csi-node-driver-mmdpd" Dec 13 01:26:42.098125 kubelet[2715]: I1213 01:26:42.098111 2715 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/f1e8aa9b-f9a7-4786-81cc-8faa5931a2c7-socket-dir\") pod \"csi-node-driver-mmdpd\" (UID: \"f1e8aa9b-f9a7-4786-81cc-8faa5931a2c7\") " pod="calico-system/csi-node-driver-mmdpd" Dec 13 01:26:42.098272 kubelet[2715]: I1213 01:26:42.098260 2715 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/f1e8aa9b-f9a7-4786-81cc-8faa5931a2c7-varrun\") pod \"csi-node-driver-mmdpd\" (UID: \"f1e8aa9b-f9a7-4786-81cc-8faa5931a2c7\") " pod="calico-system/csi-node-driver-mmdpd" Dec 13 01:26:42.098349 kubelet[2715]: I1213 01:26:42.098340 2715 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/f1e8aa9b-f9a7-4786-81cc-8faa5931a2c7-registration-dir\") pod \"csi-node-driver-mmdpd\" (UID: \"f1e8aa9b-f9a7-4786-81cc-8faa5931a2c7\") " pod="calico-system/csi-node-driver-mmdpd" Dec 13 01:26:42.098468 kubelet[2715]: I1213 01:26:42.098458 2715 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9zdzf\" (UniqueName: \"kubernetes.io/projected/f1e8aa9b-f9a7-4786-81cc-8faa5931a2c7-kube-api-access-9zdzf\") pod \"csi-node-driver-mmdpd\" (UID: \"f1e8aa9b-f9a7-4786-81cc-8faa5931a2c7\") " pod="calico-system/csi-node-driver-mmdpd" Dec 13 
01:26:42.099923 kubelet[2715]: E1213 01:26:42.099761 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:26:42.099923 kubelet[2715]: W1213 01:26:42.099781 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:26:42.100148 kubelet[2715]: E1213 01:26:42.100131 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:26:42.100148 kubelet[2715]: W1213 01:26:42.100147 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:26:42.100578 kubelet[2715]: E1213 01:26:42.100559 2715 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:26:42.100683 kubelet[2715]: E1213 01:26:42.100563 2715 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:26:42.100801 kubelet[2715]: E1213 01:26:42.100789 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:26:42.100834 kubelet[2715]: W1213 01:26:42.100801 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:26:42.100834 kubelet[2715]: E1213 01:26:42.100821 2715 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:26:42.101022 kubelet[2715]: E1213 01:26:42.100982 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:26:42.101022 kubelet[2715]: W1213 01:26:42.100989 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:26:42.101022 kubelet[2715]: E1213 01:26:42.101005 2715 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:26:42.104798 kubelet[2715]: E1213 01:26:42.104769 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:26:42.104798 kubelet[2715]: W1213 01:26:42.104787 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:26:42.104798 kubelet[2715]: E1213 01:26:42.104808 2715 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:26:42.105024 kubelet[2715]: E1213 01:26:42.104999 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:26:42.105024 kubelet[2715]: W1213 01:26:42.105011 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:26:42.105024 kubelet[2715]: E1213 01:26:42.105024 2715 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:26:42.105182 kubelet[2715]: E1213 01:26:42.105166 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:26:42.105182 kubelet[2715]: W1213 01:26:42.105179 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:26:42.105308 kubelet[2715]: E1213 01:26:42.105217 2715 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:26:42.105308 kubelet[2715]: E1213 01:26:42.105307 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:26:42.105359 kubelet[2715]: W1213 01:26:42.105314 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:26:42.105359 kubelet[2715]: E1213 01:26:42.105328 2715 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:26:42.105515 kubelet[2715]: E1213 01:26:42.105491 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:26:42.105515 kubelet[2715]: W1213 01:26:42.105501 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:26:42.105515 kubelet[2715]: E1213 01:26:42.105511 2715 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:26:42.115913 kubelet[2715]: E1213 01:26:42.115886 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:26:42.115913 kubelet[2715]: W1213 01:26:42.115904 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:26:42.115913 kubelet[2715]: E1213 01:26:42.115921 2715 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:26:42.189993 kubelet[2715]: E1213 01:26:42.189743 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:26:42.190646 containerd[1528]: time="2024-12-13T01:26:42.190306009Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-46588,Uid:16a5324f-0b1a-4895-a52d-e1247851006f,Namespace:calico-system,Attempt:0,}" Dec 13 01:26:42.199719 kubelet[2715]: E1213 01:26:42.199694 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:26:42.199719 kubelet[2715]: W1213 01:26:42.199715 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:26:42.199824 kubelet[2715]: E1213 01:26:42.199737 2715 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:26:42.199960 kubelet[2715]: E1213 01:26:42.199945 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:26:42.199960 kubelet[2715]: W1213 01:26:42.199956 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:26:42.200053 kubelet[2715]: E1213 01:26:42.199973 2715 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:26:42.200156 kubelet[2715]: E1213 01:26:42.200141 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:26:42.200156 kubelet[2715]: W1213 01:26:42.200152 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:26:42.200213 kubelet[2715]: E1213 01:26:42.200168 2715 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:26:42.200330 kubelet[2715]: E1213 01:26:42.200315 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:26:42.200330 kubelet[2715]: W1213 01:26:42.200325 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:26:42.200384 kubelet[2715]: E1213 01:26:42.200339 2715 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:26:42.200537 kubelet[2715]: E1213 01:26:42.200523 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:26:42.200537 kubelet[2715]: W1213 01:26:42.200533 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:26:42.200916 kubelet[2715]: E1213 01:26:42.200549 2715 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:26:42.200916 kubelet[2715]: E1213 01:26:42.200798 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:26:42.200916 kubelet[2715]: W1213 01:26:42.200812 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:26:42.200916 kubelet[2715]: E1213 01:26:42.200836 2715 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:26:42.201186 kubelet[2715]: E1213 01:26:42.201075 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:26:42.201186 kubelet[2715]: W1213 01:26:42.201086 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:26:42.201186 kubelet[2715]: E1213 01:26:42.201106 2715 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:26:42.201415 kubelet[2715]: E1213 01:26:42.201322 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:26:42.201415 kubelet[2715]: W1213 01:26:42.201332 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:26:42.201415 kubelet[2715]: E1213 01:26:42.201369 2715 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:26:42.201565 kubelet[2715]: E1213 01:26:42.201553 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:26:42.201699 kubelet[2715]: W1213 01:26:42.201615 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:26:42.201699 kubelet[2715]: E1213 01:26:42.201654 2715 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:26:42.201900 kubelet[2715]: E1213 01:26:42.201888 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:26:42.202039 kubelet[2715]: W1213 01:26:42.201940 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:26:42.202039 kubelet[2715]: E1213 01:26:42.201973 2715 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:26:42.202173 kubelet[2715]: E1213 01:26:42.202161 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:26:42.202303 kubelet[2715]: W1213 01:26:42.202221 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:26:42.202303 kubelet[2715]: E1213 01:26:42.202250 2715 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:26:42.202466 kubelet[2715]: E1213 01:26:42.202456 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:26:42.202526 kubelet[2715]: W1213 01:26:42.202517 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:26:42.202713 kubelet[2715]: E1213 01:26:42.202630 2715 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:26:42.202814 kubelet[2715]: E1213 01:26:42.202802 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:26:42.202872 kubelet[2715]: W1213 01:26:42.202862 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:26:42.202995 kubelet[2715]: E1213 01:26:42.202944 2715 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:26:42.203230 kubelet[2715]: E1213 01:26:42.203145 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:26:42.203230 kubelet[2715]: W1213 01:26:42.203156 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:26:42.203230 kubelet[2715]: E1213 01:26:42.203183 2715 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:26:42.203443 kubelet[2715]: E1213 01:26:42.203375 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:26:42.203443 kubelet[2715]: W1213 01:26:42.203386 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:26:42.203443 kubelet[2715]: E1213 01:26:42.203410 2715 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:26:42.203783 kubelet[2715]: E1213 01:26:42.203692 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:26:42.203783 kubelet[2715]: W1213 01:26:42.203704 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:26:42.203783 kubelet[2715]: E1213 01:26:42.203734 2715 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:26:42.203975 kubelet[2715]: E1213 01:26:42.203950 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:26:42.204028 kubelet[2715]: W1213 01:26:42.204017 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:26:42.204153 kubelet[2715]: E1213 01:26:42.204104 2715 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:26:42.204245 kubelet[2715]: E1213 01:26:42.204234 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:26:42.204424 kubelet[2715]: W1213 01:26:42.204282 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:26:42.204424 kubelet[2715]: E1213 01:26:42.204319 2715 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:26:42.204557 kubelet[2715]: E1213 01:26:42.204546 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:26:42.204624 kubelet[2715]: W1213 01:26:42.204594 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:26:42.204707 kubelet[2715]: E1213 01:26:42.204696 2715 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:26:42.204983 kubelet[2715]: E1213 01:26:42.204965 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:26:42.204983 kubelet[2715]: W1213 01:26:42.204980 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:26:42.205056 kubelet[2715]: E1213 01:26:42.205000 2715 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:26:42.205230 kubelet[2715]: E1213 01:26:42.205191 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:26:42.205230 kubelet[2715]: W1213 01:26:42.205222 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:26:42.205230 kubelet[2715]: E1213 01:26:42.205260 2715 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:26:42.205230 kubelet[2715]: E1213 01:26:42.205379 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:26:42.205230 kubelet[2715]: W1213 01:26:42.205389 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:26:42.205230 kubelet[2715]: E1213 01:26:42.205424 2715 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:26:42.205888 kubelet[2715]: E1213 01:26:42.205866 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:26:42.205888 kubelet[2715]: W1213 01:26:42.205881 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:26:42.205951 kubelet[2715]: E1213 01:26:42.205902 2715 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:26:42.206807 kubelet[2715]: E1213 01:26:42.206789 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:26:42.206857 kubelet[2715]: W1213 01:26:42.206805 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:26:42.206901 kubelet[2715]: E1213 01:26:42.206888 2715 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:26:42.207115 kubelet[2715]: E1213 01:26:42.207100 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:26:42.207115 kubelet[2715]: W1213 01:26:42.207114 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:26:42.207170 kubelet[2715]: E1213 01:26:42.207127 2715 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:26:42.215218 kubelet[2715]: E1213 01:26:42.215198 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:26:42.215218 kubelet[2715]: W1213 01:26:42.215215 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:26:42.215315 kubelet[2715]: E1213 01:26:42.215230 2715 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:26:42.230343 containerd[1528]: time="2024-12-13T01:26:42.230196365Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:26:42.230343 containerd[1528]: time="2024-12-13T01:26:42.230258172Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:26:42.230343 containerd[1528]: time="2024-12-13T01:26:42.230271093Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:26:42.230460 containerd[1528]: time="2024-12-13T01:26:42.230357304Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:26:42.263153 containerd[1528]: time="2024-12-13T01:26:42.263121097Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-46588,Uid:16a5324f-0b1a-4895-a52d-e1247851006f,Namespace:calico-system,Attempt:0,} returns sandbox id \"2e7375594888a6cfa646533f05df9bc147cece5d87ad11a5067d1f73efe658d6\"" Dec 13 01:26:42.263747 kubelet[2715]: E1213 01:26:42.263729 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:26:43.033024 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3968922557.mount: Deactivated successfully. 
Dec 13 01:26:43.654898 containerd[1528]: time="2024-12-13T01:26:43.654854246Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:26:43.655917 containerd[1528]: time="2024-12-13T01:26:43.655774791Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=29231308" Dec 13 01:26:43.656723 containerd[1528]: time="2024-12-13T01:26:43.656683574Z" level=info msg="ImageCreate event name:\"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:26:43.658743 containerd[1528]: time="2024-12-13T01:26:43.658712925Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:26:43.659658 containerd[1528]: time="2024-12-13T01:26:43.659625549Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"29231162\" in 1.589714093s" Dec 13 01:26:43.659719 containerd[1528]: time="2024-12-13T01:26:43.659659793Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\"" Dec 13 01:26:43.661408 containerd[1528]: time="2024-12-13T01:26:43.661179686Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Dec 13 01:26:43.668686 containerd[1528]: time="2024-12-13T01:26:43.668651776Z" level=info msg="CreateContainer within sandbox \"ee363de6fc11a298bf9d5e91311283983c6f7cf349b0d8819ca5950775123ca2\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Dec 13 01:26:43.679868 containerd[1528]: time="2024-12-13T01:26:43.679824808Z" level=info msg="CreateContainer within sandbox \"ee363de6fc11a298bf9d5e91311283983c6f7cf349b0d8819ca5950775123ca2\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"13634cb7db04ce9c124864eebf5aa7c61c0f8c329f33112f02fd83bf8c187b84\"" Dec 13 01:26:43.680351 containerd[1528]: time="2024-12-13T01:26:43.680313303Z" level=info msg="StartContainer for \"13634cb7db04ce9c124864eebf5aa7c61c0f8c329f33112f02fd83bf8c187b84\"" Dec 13 01:26:43.730530 containerd[1528]: time="2024-12-13T01:26:43.730487293Z" level=info msg="StartContainer for \"13634cb7db04ce9c124864eebf5aa7c61c0f8c329f33112f02fd83bf8c187b84\" returns successfully" Dec 13 01:26:44.054649 kubelet[2715]: E1213 01:26:44.054330 2715 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mmdpd" podUID="f1e8aa9b-f9a7-4786-81cc-8faa5931a2c7" Dec 13 01:26:44.124133 kubelet[2715]: E1213 01:26:44.124103 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:26:44.212490 kubelet[2715]: E1213 01:26:44.212420 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: 
unexpected end of JSON input Dec 13 01:26:44.212490 kubelet[2715]: W1213 01:26:44.212440 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:26:44.212490 kubelet[2715]: E1213 01:26:44.212462 2715 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:26:44.212873 kubelet[2715]: E1213 01:26:44.212672 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:26:44.212873 kubelet[2715]: W1213 01:26:44.212681 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:26:44.212873 kubelet[2715]: E1213 01:26:44.212694 2715 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:26:44.213079 kubelet[2715]: E1213 01:26:44.213064 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:26:44.213079 kubelet[2715]: W1213 01:26:44.213076 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:26:44.213157 kubelet[2715]: E1213 01:26:44.213092 2715 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:26:44.213327 kubelet[2715]: E1213 01:26:44.213309 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:26:44.213327 kubelet[2715]: W1213 01:26:44.213325 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:26:44.213395 kubelet[2715]: E1213 01:26:44.213337 2715 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:26:44.213547 kubelet[2715]: E1213 01:26:44.213534 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:26:44.213547 kubelet[2715]: W1213 01:26:44.213546 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:26:44.213635 kubelet[2715]: E1213 01:26:44.213558 2715 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:26:44.213754 kubelet[2715]: E1213 01:26:44.213738 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:26:44.213754 kubelet[2715]: W1213 01:26:44.213748 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:26:44.213809 kubelet[2715]: E1213 01:26:44.213760 2715 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:26:44.213945 kubelet[2715]: E1213 01:26:44.213912 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:26:44.213945 kubelet[2715]: W1213 01:26:44.213923 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:26:44.213945 kubelet[2715]: E1213 01:26:44.213933 2715 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:26:44.214089 kubelet[2715]: E1213 01:26:44.214077 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:26:44.214089 kubelet[2715]: W1213 01:26:44.214087 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:26:44.214089 kubelet[2715]: E1213 01:26:44.214098 2715 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:26:44.214285 kubelet[2715]: E1213 01:26:44.214273 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:26:44.214285 kubelet[2715]: W1213 01:26:44.214284 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:26:44.214360 kubelet[2715]: E1213 01:26:44.214294 2715 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:26:44.214447 kubelet[2715]: E1213 01:26:44.214428 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:26:44.214447 kubelet[2715]: W1213 01:26:44.214445 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:26:44.214514 kubelet[2715]: E1213 01:26:44.214456 2715 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:26:44.214667 kubelet[2715]: E1213 01:26:44.214656 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:26:44.214667 kubelet[2715]: W1213 01:26:44.214666 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:26:44.214740 kubelet[2715]: E1213 01:26:44.214677 2715 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:26:44.214851 kubelet[2715]: E1213 01:26:44.214840 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:26:44.214893 kubelet[2715]: W1213 01:26:44.214858 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:26:44.214893 kubelet[2715]: E1213 01:26:44.214877 2715 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:26:44.215034 kubelet[2715]: E1213 01:26:44.215016 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:26:44.215034 kubelet[2715]: W1213 01:26:44.215032 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:26:44.215090 kubelet[2715]: E1213 01:26:44.215042 2715 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:26:44.215219 kubelet[2715]: E1213 01:26:44.215206 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:26:44.215219 kubelet[2715]: W1213 01:26:44.215217 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:26:44.215272 kubelet[2715]: E1213 01:26:44.215228 2715 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:26:44.215397 kubelet[2715]: E1213 01:26:44.215385 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:26:44.215397 kubelet[2715]: W1213 01:26:44.215396 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:26:44.215457 kubelet[2715]: E1213 01:26:44.215407 2715 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:26:44.217683 kubelet[2715]: E1213 01:26:44.217663 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:26:44.217683 kubelet[2715]: W1213 01:26:44.217679 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:26:44.217799 kubelet[2715]: E1213 01:26:44.217694 2715 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:26:44.217926 kubelet[2715]: E1213 01:26:44.217910 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:26:44.217926 kubelet[2715]: W1213 01:26:44.217922 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:26:44.217977 kubelet[2715]: E1213 01:26:44.217938 2715 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:26:44.218155 kubelet[2715]: E1213 01:26:44.218140 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:26:44.218155 kubelet[2715]: W1213 01:26:44.218151 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:26:44.218218 kubelet[2715]: E1213 01:26:44.218167 2715 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:26:44.218386 kubelet[2715]: E1213 01:26:44.218373 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:26:44.218386 kubelet[2715]: W1213 01:26:44.218384 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:26:44.218456 kubelet[2715]: E1213 01:26:44.218401 2715 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:26:44.218551 kubelet[2715]: E1213 01:26:44.218539 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:26:44.218551 kubelet[2715]: W1213 01:26:44.218549 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:26:44.218603 kubelet[2715]: E1213 01:26:44.218562 2715 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:26:44.218761 kubelet[2715]: E1213 01:26:44.218735 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:26:44.218761 kubelet[2715]: W1213 01:26:44.218745 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:26:44.218761 kubelet[2715]: E1213 01:26:44.218765 2715 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:26:44.218990 kubelet[2715]: E1213 01:26:44.218976 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:26:44.218990 kubelet[2715]: W1213 01:26:44.218986 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:26:44.219050 kubelet[2715]: E1213 01:26:44.219022 2715 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:26:44.219146 kubelet[2715]: E1213 01:26:44.219134 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:26:44.219146 kubelet[2715]: W1213 01:26:44.219143 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:26:44.219275 kubelet[2715]: E1213 01:26:44.219191 2715 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:26:44.219309 kubelet[2715]: E1213 01:26:44.219291 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:26:44.219309 kubelet[2715]: W1213 01:26:44.219298 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:26:44.219361 kubelet[2715]: E1213 01:26:44.219314 2715 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:26:44.219500 kubelet[2715]: E1213 01:26:44.219478 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:26:44.219500 kubelet[2715]: W1213 01:26:44.219487 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:26:44.219500 kubelet[2715]: E1213 01:26:44.219501 2715 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:26:44.219663 kubelet[2715]: E1213 01:26:44.219652 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:26:44.219663 kubelet[2715]: W1213 01:26:44.219662 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:26:44.219719 kubelet[2715]: E1213 01:26:44.219672 2715 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:26:44.219825 kubelet[2715]: E1213 01:26:44.219814 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:26:44.219825 kubelet[2715]: W1213 01:26:44.219824 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:26:44.219886 kubelet[2715]: E1213 01:26:44.219837 2715 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:26:44.220250 kubelet[2715]: E1213 01:26:44.220236 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:26:44.220250 kubelet[2715]: W1213 01:26:44.220249 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:26:44.220300 kubelet[2715]: E1213 01:26:44.220264 2715 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:26:44.220431 kubelet[2715]: E1213 01:26:44.220421 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:26:44.220431 kubelet[2715]: W1213 01:26:44.220431 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:26:44.220490 kubelet[2715]: E1213 01:26:44.220445 2715 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:26:44.220641 kubelet[2715]: E1213 01:26:44.220631 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:26:44.220667 kubelet[2715]: W1213 01:26:44.220641 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:26:44.220667 kubelet[2715]: E1213 01:26:44.220651 2715 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:26:44.221314 kubelet[2715]: E1213 01:26:44.221283 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:26:44.221314 kubelet[2715]: W1213 01:26:44.221299 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:26:44.221314 kubelet[2715]: E1213 01:26:44.221314 2715 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:26:44.221534 kubelet[2715]: E1213 01:26:44.221521 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:26:44.221534 kubelet[2715]: W1213 01:26:44.221533 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:26:44.221584 kubelet[2715]: E1213 01:26:44.221546 2715 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:26:44.221911 kubelet[2715]: E1213 01:26:44.221838 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:26:44.221911 kubelet[2715]: W1213 01:26:44.221855 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:26:44.221911 kubelet[2715]: E1213 01:26:44.221876 2715 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:26:45.066619 containerd[1528]: time="2024-12-13T01:26:45.066553253Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:26:45.067299 containerd[1528]: time="2024-12-13T01:26:45.067232885Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=5117811" Dec 13 01:26:45.067949 containerd[1528]: time="2024-12-13T01:26:45.067920278Z" level=info msg="ImageCreate event name:\"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:26:45.069691 containerd[1528]: time="2024-12-13T01:26:45.069661902Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:26:45.070400 containerd[1528]: time="2024-12-13T01:26:45.070363616Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6487425\" in 1.409142245s" Dec 13 01:26:45.070448 containerd[1528]: time="2024-12-13T01:26:45.070401020Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\"" Dec 13 01:26:45.072249 containerd[1528]: time="2024-12-13T01:26:45.072084118Z" level=info msg="CreateContainer within sandbox \"2e7375594888a6cfa646533f05df9bc147cece5d87ad11a5067d1f73efe658d6\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Dec 13 01:26:45.088587 containerd[1528]: time="2024-12-13T01:26:45.088547779Z" level=info msg="CreateContainer within sandbox \"2e7375594888a6cfa646533f05df9bc147cece5d87ad11a5067d1f73efe658d6\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"79c43b577fecb4c14b522571226daedb0a4ae4da360eb4442c18bc0b5d3fe4e7\"" Dec 13 01:26:45.089425 containerd[1528]: time="2024-12-13T01:26:45.089391628Z" level=info msg="StartContainer for \"79c43b577fecb4c14b522571226daedb0a4ae4da360eb4442c18bc0b5d3fe4e7\"" Dec 13 01:26:45.128633 kubelet[2715]: I1213 01:26:45.128565 2715 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 01:26:45.129317 kubelet[2715]: E1213 01:26:45.129282 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:26:45.144620 containerd[1528]: time="2024-12-13T01:26:45.144576984Z" level=info msg="StartContainer for \"79c43b577fecb4c14b522571226daedb0a4ae4da360eb4442c18bc0b5d3fe4e7\" returns successfully" Dec 13 01:26:45.181791 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-79c43b577fecb4c14b522571226daedb0a4ae4da360eb4442c18bc0b5d3fe4e7-rootfs.mount: Deactivated successfully. 
Dec 13 01:26:45.203029 containerd[1528]: time="2024-12-13T01:26:45.199404862Z" level=info msg="shim disconnected" id=79c43b577fecb4c14b522571226daedb0a4ae4da360eb4442c18bc0b5d3fe4e7 namespace=k8s.io Dec 13 01:26:45.203346 containerd[1528]: time="2024-12-13T01:26:45.203189543Z" level=warning msg="cleaning up after shim disconnected" id=79c43b577fecb4c14b522571226daedb0a4ae4da360eb4442c18bc0b5d3fe4e7 namespace=k8s.io Dec 13 01:26:45.203346 containerd[1528]: time="2024-12-13T01:26:45.203210465Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:26:46.053951 kubelet[2715]: E1213 01:26:46.053909 2715 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mmdpd" podUID="f1e8aa9b-f9a7-4786-81cc-8faa5931a2c7" Dec 13 01:26:46.129461 kubelet[2715]: E1213 01:26:46.129423 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:26:46.130968 containerd[1528]: time="2024-12-13T01:26:46.130731364Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Dec 13 01:26:46.144670 kubelet[2715]: I1213 01:26:46.144626 2715 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-7fbd5c88f4-fjfks" podStartSLOduration=3.553443569 podStartE2EDuration="5.143950394s" podCreationTimestamp="2024-12-13 01:26:41 +0000 UTC" firstStartedPulling="2024-12-13 01:26:42.069383074 +0000 UTC m=+24.105198063" lastFinishedPulling="2024-12-13 01:26:43.659889859 +0000 UTC m=+25.695704888" observedRunningTime="2024-12-13 01:26:44.132842812 +0000 UTC m=+26.168657841" watchObservedRunningTime="2024-12-13 01:26:46.143950394 +0000 UTC m=+28.179765423" Dec 13 01:26:47.412410 kubelet[2715]: I1213 01:26:47.412258 2715 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 01:26:47.413489 kubelet[2715]: E1213 01:26:47.413429 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:26:47.638848 systemd[1]: Started sshd@7-10.0.0.44:22-10.0.0.1:60780.service - OpenSSH per-connection server daemon (10.0.0.1:60780). Dec 13 01:26:47.674883 sshd[3389]: Accepted publickey for core from 10.0.0.1 port 60780 ssh2: RSA SHA256:yVKhZEHbC7ylZ7bY3Y8pwdh1t/xp6Vz/y3yLFfd9j+Q Dec 13 01:26:47.676720 sshd[3389]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:26:47.680687 systemd-logind[1510]: New session 8 of user core. Dec 13 01:26:47.689041 systemd[1]: Started session-8.scope - Session 8 of User core. Dec 13 01:26:47.829919 sshd[3389]: pam_unix(sshd:session): session closed for user core Dec 13 01:26:47.833254 systemd[1]: sshd@7-10.0.0.44:22-10.0.0.1:60780.service: Deactivated successfully. Dec 13 01:26:47.836486 systemd[1]: session-8.scope: Deactivated successfully. Dec 13 01:26:47.838063 systemd-logind[1510]: Session 8 logged out. Waiting for processes to exit. Dec 13 01:26:47.839490 systemd-logind[1510]: Removed session 8. 
Dec 13 01:26:48.054697 kubelet[2715]: E1213 01:26:48.054659 2715 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mmdpd" podUID="f1e8aa9b-f9a7-4786-81cc-8faa5931a2c7" Dec 13 01:26:48.133863 kubelet[2715]: E1213 01:26:48.133830 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:26:48.785524 containerd[1528]: time="2024-12-13T01:26:48.785469973Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:26:48.786149 containerd[1528]: time="2024-12-13T01:26:48.786113355Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=89703123" Dec 13 01:26:48.786757 containerd[1528]: time="2024-12-13T01:26:48.786703171Z" level=info msg="ImageCreate event name:\"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:26:48.789138 containerd[1528]: time="2024-12-13T01:26:48.788931344Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:26:48.790004 containerd[1528]: time="2024-12-13T01:26:48.789874754Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"91072777\" in 2.659076263s" Dec 13 01:26:48.790004 containerd[1528]: time="2024-12-13T01:26:48.789914437Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\"" Dec 13 01:26:48.793072 containerd[1528]: time="2024-12-13T01:26:48.793038096Z" level=info msg="CreateContainer within sandbox \"2e7375594888a6cfa646533f05df9bc147cece5d87ad11a5067d1f73efe658d6\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Dec 13 01:26:48.818984 containerd[1528]: time="2024-12-13T01:26:48.818886284Z" level=info msg="CreateContainer within sandbox \"2e7375594888a6cfa646533f05df9bc147cece5d87ad11a5067d1f73efe658d6\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"d7ff8532a081ca627af40c782c3da054fdf88fb46f6d9af22128e6f778e2ffc5\"" Dec 13 01:26:48.819676 containerd[1528]: time="2024-12-13T01:26:48.819328526Z" level=info msg="StartContainer for \"d7ff8532a081ca627af40c782c3da054fdf88fb46f6d9af22128e6f778e2ffc5\"" Dec 13 01:26:48.870977 containerd[1528]: time="2024-12-13T01:26:48.870937414Z" level=info msg="StartContainer for \"d7ff8532a081ca627af40c782c3da054fdf88fb46f6d9af22128e6f778e2ffc5\" returns successfully" Dec 13 01:26:49.141428 kubelet[2715]: E1213 01:26:49.139644 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:26:49.512464 systemd[1]: 
run-containerd-io.containerd.runtime.v2.task-k8s.io-d7ff8532a081ca627af40c782c3da054fdf88fb46f6d9af22128e6f778e2ffc5-rootfs.mount: Deactivated successfully. Dec 13 01:26:49.515877 kubelet[2715]: I1213 01:26:49.515851 2715 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Dec 13 01:26:49.538066 containerd[1528]: time="2024-12-13T01:26:49.537990570Z" level=info msg="shim disconnected" id=d7ff8532a081ca627af40c782c3da054fdf88fb46f6d9af22128e6f778e2ffc5 namespace=k8s.io Dec 13 01:26:49.538066 containerd[1528]: time="2024-12-13T01:26:49.538060497Z" level=warning msg="cleaning up after shim disconnected" id=d7ff8532a081ca627af40c782c3da054fdf88fb46f6d9af22128e6f778e2ffc5 namespace=k8s.io Dec 13 01:26:49.538066 containerd[1528]: time="2024-12-13T01:26:49.538070018Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:26:49.552820 kubelet[2715]: I1213 01:26:49.552774 2715 topology_manager.go:215] "Topology Admit Handler" podUID="931767d7-5830-4f3b-991c-c63e121572c9" podNamespace="kube-system" podName="coredns-76f75df574-l5x25" Dec 13 01:26:49.556135 kubelet[2715]: I1213 01:26:49.556110 2715 topology_manager.go:215] "Topology Admit Handler" podUID="ebfbe69b-4807-4a90-8634-a91c3bc497ca" podNamespace="calico-apiserver" podName="calico-apiserver-b7d896755-dlt9b" Dec 13 01:26:49.558308 kubelet[2715]: I1213 01:26:49.558255 2715 topology_manager.go:215] "Topology Admit Handler" podUID="f2e87485-5f8e-4164-9df5-1329c4f71d1a" podNamespace="calico-system" podName="calico-kube-controllers-84b598996-gt9q4" Dec 13 01:26:49.560308 kubelet[2715]: I1213 01:26:49.560180 2715 topology_manager.go:215] "Topology Admit Handler" podUID="f0cd5edf-7433-4815-a273-dae6acb01eb3" podNamespace="calico-apiserver" podName="calico-apiserver-b7d896755-5fn2z" Dec 13 01:26:49.560775 kubelet[2715]: I1213 01:26:49.560513 2715 topology_manager.go:215] "Topology Admit Handler" podUID="00ee66f4-a276-4315-8517-eae981e857e4" podNamespace="kube-system" podName="coredns-76f75df574-pfwc5" Dec 13 01:26:49.564591 containerd[1528]: time="2024-12-13T01:26:49.564343487Z" level=warning msg="cleanup warnings time=\"2024-12-13T01:26:49Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Dec 13 01:26:49.653179 kubelet[2715]: I1213 01:26:49.653136 2715 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bb54d\" (UniqueName: \"kubernetes.io/projected/f2e87485-5f8e-4164-9df5-1329c4f71d1a-kube-api-access-bb54d\") pod \"calico-kube-controllers-84b598996-gt9q4\" (UID: \"f2e87485-5f8e-4164-9df5-1329c4f71d1a\") " pod="calico-system/calico-kube-controllers-84b598996-gt9q4" Dec 13 01:26:49.653179 kubelet[2715]: I1213 01:26:49.653185 2715 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w9h78\" (UniqueName: \"kubernetes.io/projected/f0cd5edf-7433-4815-a273-dae6acb01eb3-kube-api-access-w9h78\") pod \"calico-apiserver-b7d896755-5fn2z\" (UID: \"f0cd5edf-7433-4815-a273-dae6acb01eb3\") " pod="calico-apiserver/calico-apiserver-b7d896755-5fn2z" Dec 13 01:26:49.653354 kubelet[2715]: I1213 01:26:49.653208 2715 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/00ee66f4-a276-4315-8517-eae981e857e4-config-volume\") pod \"coredns-76f75df574-pfwc5\" (UID: 
\"00ee66f4-a276-4315-8517-eae981e857e4\") " pod="kube-system/coredns-76f75df574-pfwc5" Dec 13 01:26:49.653354 kubelet[2715]: I1213 01:26:49.653233 2715 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vhd2d\" (UniqueName: \"kubernetes.io/projected/ebfbe69b-4807-4a90-8634-a91c3bc497ca-kube-api-access-vhd2d\") pod \"calico-apiserver-b7d896755-dlt9b\" (UID: \"ebfbe69b-4807-4a90-8634-a91c3bc497ca\") " pod="calico-apiserver/calico-apiserver-b7d896755-dlt9b" Dec 13 01:26:49.653354 kubelet[2715]: I1213 01:26:49.653255 2715 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sd5sg\" (UniqueName: \"kubernetes.io/projected/931767d7-5830-4f3b-991c-c63e121572c9-kube-api-access-sd5sg\") pod \"coredns-76f75df574-l5x25\" (UID: \"931767d7-5830-4f3b-991c-c63e121572c9\") " pod="kube-system/coredns-76f75df574-l5x25" Dec 13 01:26:49.653354 kubelet[2715]: I1213 01:26:49.653275 2715 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/ebfbe69b-4807-4a90-8634-a91c3bc497ca-calico-apiserver-certs\") pod \"calico-apiserver-b7d896755-dlt9b\" (UID: \"ebfbe69b-4807-4a90-8634-a91c3bc497ca\") " pod="calico-apiserver/calico-apiserver-b7d896755-dlt9b" Dec 13 01:26:49.653354 kubelet[2715]: I1213 01:26:49.653299 2715 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/f0cd5edf-7433-4815-a273-dae6acb01eb3-calico-apiserver-certs\") pod \"calico-apiserver-b7d896755-5fn2z\" (UID: \"f0cd5edf-7433-4815-a273-dae6acb01eb3\") " pod="calico-apiserver/calico-apiserver-b7d896755-5fn2z" Dec 13 01:26:49.653493 kubelet[2715]: I1213 01:26:49.653319 2715 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f2e87485-5f8e-4164-9df5-1329c4f71d1a-tigera-ca-bundle\") pod \"calico-kube-controllers-84b598996-gt9q4\" (UID: \"f2e87485-5f8e-4164-9df5-1329c4f71d1a\") " pod="calico-system/calico-kube-controllers-84b598996-gt9q4" Dec 13 01:26:49.653493 kubelet[2715]: I1213 01:26:49.653342 2715 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-snmkp\" (UniqueName: \"kubernetes.io/projected/00ee66f4-a276-4315-8517-eae981e857e4-kube-api-access-snmkp\") pod \"coredns-76f75df574-pfwc5\" (UID: \"00ee66f4-a276-4315-8517-eae981e857e4\") " pod="kube-system/coredns-76f75df574-pfwc5" Dec 13 01:26:49.653493 kubelet[2715]: I1213 01:26:49.653362 2715 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/931767d7-5830-4f3b-991c-c63e121572c9-config-volume\") pod \"coredns-76f75df574-l5x25\" (UID: \"931767d7-5830-4f3b-991c-c63e121572c9\") " pod="kube-system/coredns-76f75df574-l5x25" Dec 13 01:26:49.863043 kubelet[2715]: E1213 01:26:49.862871 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:26:49.863418 containerd[1528]: time="2024-12-13T01:26:49.863358539Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-l5x25,Uid:931767d7-5830-4f3b-991c-c63e121572c9,Namespace:kube-system,Attempt:0,}" Dec 13 01:26:49.868940 
containerd[1528]: time="2024-12-13T01:26:49.868894771Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-84b598996-gt9q4,Uid:f2e87485-5f8e-4164-9df5-1329c4f71d1a,Namespace:calico-system,Attempt:0,}" Dec 13 01:26:49.870667 containerd[1528]: time="2024-12-13T01:26:49.870631332Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b7d896755-dlt9b,Uid:ebfbe69b-4807-4a90-8634-a91c3bc497ca,Namespace:calico-apiserver,Attempt:0,}" Dec 13 01:26:49.872200 containerd[1528]: time="2024-12-13T01:26:49.872147392Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b7d896755-5fn2z,Uid:f0cd5edf-7433-4815-a273-dae6acb01eb3,Namespace:calico-apiserver,Attempt:0,}" Dec 13 01:26:49.875564 kubelet[2715]: E1213 01:26:49.875509 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:26:49.876176 containerd[1528]: time="2024-12-13T01:26:49.876139681Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-pfwc5,Uid:00ee66f4-a276-4315-8517-eae981e857e4,Namespace:kube-system,Attempt:0,}" Dec 13 01:26:50.061760 containerd[1528]: time="2024-12-13T01:26:50.061332406Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mmdpd,Uid:f1e8aa9b-f9a7-4786-81cc-8faa5931a2c7,Namespace:calico-system,Attempt:0,}" Dec 13 01:26:50.183561 kubelet[2715]: E1213 01:26:50.182424 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:26:50.204281 containerd[1528]: time="2024-12-13T01:26:50.204118849Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Dec 13 01:26:50.375809 containerd[1528]: time="2024-12-13T01:26:50.375759558Z" level=error msg="Failed to destroy network for sandbox \"7c991000aab5483990110654a983a36bc6b436881b5dd8ec7e317d4129a4a4c2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:26:50.376315 containerd[1528]: time="2024-12-13T01:26:50.376083467Z" level=error msg="encountered an error cleaning up failed sandbox \"7c991000aab5483990110654a983a36bc6b436881b5dd8ec7e317d4129a4a4c2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:26:50.376315 containerd[1528]: time="2024-12-13T01:26:50.376148113Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-l5x25,Uid:931767d7-5830-4f3b-991c-c63e121572c9,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7c991000aab5483990110654a983a36bc6b436881b5dd8ec7e317d4129a4a4c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:26:50.376400 containerd[1528]: time="2024-12-13T01:26:50.376290646Z" level=error msg="Failed to destroy network for sandbox \"55bd0d64065eb3ffbbc5417c3405e781a368ce3e7beb30f5da484159c4e1f037\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Dec 13 01:26:50.376711 containerd[1528]: time="2024-12-13T01:26:50.376634196Z" level=error msg="encountered an error cleaning up failed sandbox \"55bd0d64065eb3ffbbc5417c3405e781a368ce3e7beb30f5da484159c4e1f037\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:26:50.376711 containerd[1528]: time="2024-12-13T01:26:50.376679480Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-84b598996-gt9q4,Uid:f2e87485-5f8e-4164-9df5-1329c4f71d1a,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"55bd0d64065eb3ffbbc5417c3405e781a368ce3e7beb30f5da484159c4e1f037\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:26:50.379665 kubelet[2715]: E1213 01:26:50.379569 2715 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7c991000aab5483990110654a983a36bc6b436881b5dd8ec7e317d4129a4a4c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:26:50.379665 kubelet[2715]: E1213 01:26:50.379664 2715 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7c991000aab5483990110654a983a36bc6b436881b5dd8ec7e317d4129a4a4c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-l5x25" Dec 13 01:26:50.379778 kubelet[2715]: E1213 01:26:50.379687 2715 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7c991000aab5483990110654a983a36bc6b436881b5dd8ec7e317d4129a4a4c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-l5x25" Dec 13 01:26:50.379778 kubelet[2715]: E1213 01:26:50.379755 2715 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-l5x25_kube-system(931767d7-5830-4f3b-991c-c63e121572c9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-l5x25_kube-system(931767d7-5830-4f3b-991c-c63e121572c9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7c991000aab5483990110654a983a36bc6b436881b5dd8ec7e317d4129a4a4c2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-l5x25" podUID="931767d7-5830-4f3b-991c-c63e121572c9" Dec 13 01:26:50.379858 kubelet[2715]: E1213 01:26:50.379803 2715 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"55bd0d64065eb3ffbbc5417c3405e781a368ce3e7beb30f5da484159c4e1f037\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:26:50.380271 kubelet[2715]: E1213 01:26:50.379890 2715 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"55bd0d64065eb3ffbbc5417c3405e781a368ce3e7beb30f5da484159c4e1f037\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-84b598996-gt9q4" Dec 13 01:26:50.380271 kubelet[2715]: E1213 01:26:50.379919 2715 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"55bd0d64065eb3ffbbc5417c3405e781a368ce3e7beb30f5da484159c4e1f037\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-84b598996-gt9q4" Dec 13 01:26:50.380271 kubelet[2715]: E1213 01:26:50.380086 2715 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-84b598996-gt9q4_calico-system(f2e87485-5f8e-4164-9df5-1329c4f71d1a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-84b598996-gt9q4_calico-system(f2e87485-5f8e-4164-9df5-1329c4f71d1a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"55bd0d64065eb3ffbbc5417c3405e781a368ce3e7beb30f5da484159c4e1f037\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-84b598996-gt9q4" podUID="f2e87485-5f8e-4164-9df5-1329c4f71d1a" Dec 13 01:26:50.386896 containerd[1528]: time="2024-12-13T01:26:50.386832871Z" level=error msg="Failed to destroy network for sandbox \"d5c865eef5abf820b54edd80942497b241a2ccc708c307fea4f92854f55789e1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:26:50.387164 containerd[1528]: time="2024-12-13T01:26:50.387127897Z" level=error msg="encountered an error cleaning up failed sandbox \"d5c865eef5abf820b54edd80942497b241a2ccc708c307fea4f92854f55789e1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:26:50.387210 containerd[1528]: time="2024-12-13T01:26:50.387171861Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mmdpd,Uid:f1e8aa9b-f9a7-4786-81cc-8faa5931a2c7,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d5c865eef5abf820b54edd80942497b241a2ccc708c307fea4f92854f55789e1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:26:50.387417 kubelet[2715]: E1213 01:26:50.387362 2715 
remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d5c865eef5abf820b54edd80942497b241a2ccc708c307fea4f92854f55789e1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:26:50.387417 kubelet[2715]: E1213 01:26:50.387416 2715 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d5c865eef5abf820b54edd80942497b241a2ccc708c307fea4f92854f55789e1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-mmdpd" Dec 13 01:26:50.387515 kubelet[2715]: E1213 01:26:50.387436 2715 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d5c865eef5abf820b54edd80942497b241a2ccc708c307fea4f92854f55789e1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-mmdpd" Dec 13 01:26:50.387515 kubelet[2715]: E1213 01:26:50.387487 2715 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-mmdpd_calico-system(f1e8aa9b-f9a7-4786-81cc-8faa5931a2c7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-mmdpd_calico-system(f1e8aa9b-f9a7-4786-81cc-8faa5931a2c7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d5c865eef5abf820b54edd80942497b241a2ccc708c307fea4f92854f55789e1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-mmdpd" podUID="f1e8aa9b-f9a7-4786-81cc-8faa5931a2c7" Dec 13 01:26:50.390309 containerd[1528]: time="2024-12-13T01:26:50.390262298Z" level=error msg="Failed to destroy network for sandbox \"2ffd9e2fc34cfd651fe8595556686f82bf711fd5575fd65d1404da8efd63709b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:26:50.391626 containerd[1528]: time="2024-12-13T01:26:50.390551764Z" level=error msg="encountered an error cleaning up failed sandbox \"2ffd9e2fc34cfd651fe8595556686f82bf711fd5575fd65d1404da8efd63709b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:26:50.391626 containerd[1528]: time="2024-12-13T01:26:50.390586087Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-pfwc5,Uid:00ee66f4-a276-4315-8517-eae981e857e4,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2ffd9e2fc34cfd651fe8595556686f82bf711fd5575fd65d1404da8efd63709b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 
01:26:50.391691 kubelet[2715]: E1213 01:26:50.390769 2715 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2ffd9e2fc34cfd651fe8595556686f82bf711fd5575fd65d1404da8efd63709b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:26:50.391691 kubelet[2715]: E1213 01:26:50.390816 2715 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2ffd9e2fc34cfd651fe8595556686f82bf711fd5575fd65d1404da8efd63709b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-pfwc5" Dec 13 01:26:50.391691 kubelet[2715]: E1213 01:26:50.390833 2715 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2ffd9e2fc34cfd651fe8595556686f82bf711fd5575fd65d1404da8efd63709b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-pfwc5" Dec 13 01:26:50.391762 kubelet[2715]: E1213 01:26:50.390875 2715 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-pfwc5_kube-system(00ee66f4-a276-4315-8517-eae981e857e4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-pfwc5_kube-system(00ee66f4-a276-4315-8517-eae981e857e4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2ffd9e2fc34cfd651fe8595556686f82bf711fd5575fd65d1404da8efd63709b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-pfwc5" podUID="00ee66f4-a276-4315-8517-eae981e857e4" Dec 13 01:26:50.396500 containerd[1528]: time="2024-12-13T01:26:50.396428051Z" level=error msg="Failed to destroy network for sandbox \"5cba27d8ad78d9dfa7a3f814db920f0e37e2e358586b98f8235c0c959aa27d36\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:26:50.397856 containerd[1528]: time="2024-12-13T01:26:50.397416780Z" level=error msg="encountered an error cleaning up failed sandbox \"5cba27d8ad78d9dfa7a3f814db920f0e37e2e358586b98f8235c0c959aa27d36\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:26:50.397856 containerd[1528]: time="2024-12-13T01:26:50.397470305Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b7d896755-5fn2z,Uid:f0cd5edf-7433-4815-a273-dae6acb01eb3,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5cba27d8ad78d9dfa7a3f814db920f0e37e2e358586b98f8235c0c959aa27d36\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:26:50.398108 kubelet[2715]: E1213 01:26:50.397664 2715 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5cba27d8ad78d9dfa7a3f814db920f0e37e2e358586b98f8235c0c959aa27d36\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:26:50.398108 kubelet[2715]: E1213 01:26:50.397729 2715 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5cba27d8ad78d9dfa7a3f814db920f0e37e2e358586b98f8235c0c959aa27d36\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-b7d896755-5fn2z" Dec 13 01:26:50.398108 kubelet[2715]: E1213 01:26:50.397747 2715 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5cba27d8ad78d9dfa7a3f814db920f0e37e2e358586b98f8235c0c959aa27d36\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-b7d896755-5fn2z" Dec 13 01:26:50.398203 kubelet[2715]: E1213 01:26:50.397811 2715 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-b7d896755-5fn2z_calico-apiserver(f0cd5edf-7433-4815-a273-dae6acb01eb3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-b7d896755-5fn2z_calico-apiserver(f0cd5edf-7433-4815-a273-dae6acb01eb3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5cba27d8ad78d9dfa7a3f814db920f0e37e2e358586b98f8235c0c959aa27d36\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-b7d896755-5fn2z" podUID="f0cd5edf-7433-4815-a273-dae6acb01eb3" Dec 13 01:26:50.403644 containerd[1528]: time="2024-12-13T01:26:50.403550890Z" level=error msg="Failed to destroy network for sandbox \"af1b3e13272174c75b138bb491fa40bb3981ebca13b40bca3a72da90dcbbd750\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:26:50.403891 containerd[1528]: time="2024-12-13T01:26:50.403854757Z" level=error msg="encountered an error cleaning up failed sandbox \"af1b3e13272174c75b138bb491fa40bb3981ebca13b40bca3a72da90dcbbd750\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:26:50.403934 containerd[1528]: time="2024-12-13T01:26:50.403901361Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b7d896755-dlt9b,Uid:ebfbe69b-4807-4a90-8634-a91c3bc497ca,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox 
\"af1b3e13272174c75b138bb491fa40bb3981ebca13b40bca3a72da90dcbbd750\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:26:50.404101 kubelet[2715]: E1213 01:26:50.404069 2715 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"af1b3e13272174c75b138bb491fa40bb3981ebca13b40bca3a72da90dcbbd750\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:26:50.404139 kubelet[2715]: E1213 01:26:50.404116 2715 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"af1b3e13272174c75b138bb491fa40bb3981ebca13b40bca3a72da90dcbbd750\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-b7d896755-dlt9b" Dec 13 01:26:50.404139 kubelet[2715]: E1213 01:26:50.404136 2715 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"af1b3e13272174c75b138bb491fa40bb3981ebca13b40bca3a72da90dcbbd750\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-b7d896755-dlt9b" Dec 13 01:26:50.404199 kubelet[2715]: E1213 01:26:50.404187 2715 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-b7d896755-dlt9b_calico-apiserver(ebfbe69b-4807-4a90-8634-a91c3bc497ca)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-b7d896755-dlt9b_calico-apiserver(ebfbe69b-4807-4a90-8634-a91c3bc497ca)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"af1b3e13272174c75b138bb491fa40bb3981ebca13b40bca3a72da90dcbbd750\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-b7d896755-dlt9b" podUID="ebfbe69b-4807-4a90-8634-a91c3bc497ca" Dec 13 01:26:50.815467 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-55bd0d64065eb3ffbbc5417c3405e781a368ce3e7beb30f5da484159c4e1f037-shm.mount: Deactivated successfully. Dec 13 01:26:50.815656 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7c991000aab5483990110654a983a36bc6b436881b5dd8ec7e317d4129a4a4c2-shm.mount: Deactivated successfully. 
Dec 13 01:26:51.186534 kubelet[2715]: I1213 01:26:51.185841 2715 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d5c865eef5abf820b54edd80942497b241a2ccc708c307fea4f92854f55789e1" Dec 13 01:26:51.186863 containerd[1528]: time="2024-12-13T01:26:51.186450515Z" level=info msg="StopPodSandbox for \"d5c865eef5abf820b54edd80942497b241a2ccc708c307fea4f92854f55789e1\"" Dec 13 01:26:51.186863 containerd[1528]: time="2024-12-13T01:26:51.186632050Z" level=info msg="Ensure that sandbox d5c865eef5abf820b54edd80942497b241a2ccc708c307fea4f92854f55789e1 in task-service has been cleanup successfully" Dec 13 01:26:51.190124 kubelet[2715]: I1213 01:26:51.187119 2715 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7c991000aab5483990110654a983a36bc6b436881b5dd8ec7e317d4129a4a4c2" Dec 13 01:26:51.190212 containerd[1528]: time="2024-12-13T01:26:51.187522808Z" level=info msg="StopPodSandbox for \"7c991000aab5483990110654a983a36bc6b436881b5dd8ec7e317d4129a4a4c2\"" Dec 13 01:26:51.190212 containerd[1528]: time="2024-12-13T01:26:51.187729026Z" level=info msg="Ensure that sandbox 7c991000aab5483990110654a983a36bc6b436881b5dd8ec7e317d4129a4a4c2 in task-service has been cleanup successfully" Dec 13 01:26:51.190646 kubelet[2715]: I1213 01:26:51.190628 2715 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="55bd0d64065eb3ffbbc5417c3405e781a368ce3e7beb30f5da484159c4e1f037" Dec 13 01:26:51.191187 containerd[1528]: time="2024-12-13T01:26:51.191064516Z" level=info msg="StopPodSandbox for \"55bd0d64065eb3ffbbc5417c3405e781a368ce3e7beb30f5da484159c4e1f037\"" Dec 13 01:26:51.191452 containerd[1528]: time="2024-12-13T01:26:51.191353021Z" level=info msg="Ensure that sandbox 55bd0d64065eb3ffbbc5417c3405e781a368ce3e7beb30f5da484159c4e1f037 in task-service has been cleanup successfully" Dec 13 01:26:51.192750 kubelet[2715]: I1213 01:26:51.192445 2715 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5cba27d8ad78d9dfa7a3f814db920f0e37e2e358586b98f8235c0c959aa27d36" Dec 13 01:26:51.193999 containerd[1528]: time="2024-12-13T01:26:51.193246666Z" level=info msg="StopPodSandbox for \"5cba27d8ad78d9dfa7a3f814db920f0e37e2e358586b98f8235c0c959aa27d36\"" Dec 13 01:26:51.193999 containerd[1528]: time="2024-12-13T01:26:51.193396319Z" level=info msg="Ensure that sandbox 5cba27d8ad78d9dfa7a3f814db920f0e37e2e358586b98f8235c0c959aa27d36 in task-service has been cleanup successfully" Dec 13 01:26:51.196662 kubelet[2715]: I1213 01:26:51.196642 2715 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2ffd9e2fc34cfd651fe8595556686f82bf711fd5575fd65d1404da8efd63709b" Dec 13 01:26:51.197382 containerd[1528]: time="2024-12-13T01:26:51.197105122Z" level=info msg="StopPodSandbox for \"2ffd9e2fc34cfd651fe8595556686f82bf711fd5575fd65d1404da8efd63709b\"" Dec 13 01:26:51.197382 containerd[1528]: time="2024-12-13T01:26:51.197268736Z" level=info msg="Ensure that sandbox 2ffd9e2fc34cfd651fe8595556686f82bf711fd5575fd65d1404da8efd63709b in task-service has been cleanup successfully" Dec 13 01:26:51.199783 kubelet[2715]: I1213 01:26:51.199725 2715 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="af1b3e13272174c75b138bb491fa40bb3981ebca13b40bca3a72da90dcbbd750" Dec 13 01:26:51.200448 containerd[1528]: time="2024-12-13T01:26:51.200326962Z" level=info msg="StopPodSandbox for \"af1b3e13272174c75b138bb491fa40bb3981ebca13b40bca3a72da90dcbbd750\"" Dec 13 01:26:51.200602 
containerd[1528]: time="2024-12-13T01:26:51.200576584Z" level=info msg="Ensure that sandbox af1b3e13272174c75b138bb491fa40bb3981ebca13b40bca3a72da90dcbbd750 in task-service has been cleanup successfully" Dec 13 01:26:51.239596 containerd[1528]: time="2024-12-13T01:26:51.239518493Z" level=error msg="StopPodSandbox for \"2ffd9e2fc34cfd651fe8595556686f82bf711fd5575fd65d1404da8efd63709b\" failed" error="failed to destroy network for sandbox \"2ffd9e2fc34cfd651fe8595556686f82bf711fd5575fd65d1404da8efd63709b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:26:51.239883 kubelet[2715]: E1213 01:26:51.239847 2715 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"2ffd9e2fc34cfd651fe8595556686f82bf711fd5575fd65d1404da8efd63709b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="2ffd9e2fc34cfd651fe8595556686f82bf711fd5575fd65d1404da8efd63709b" Dec 13 01:26:51.239947 kubelet[2715]: E1213 01:26:51.239933 2715 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"2ffd9e2fc34cfd651fe8595556686f82bf711fd5575fd65d1404da8efd63709b"} Dec 13 01:26:51.239985 kubelet[2715]: E1213 01:26:51.239968 2715 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"00ee66f4-a276-4315-8517-eae981e857e4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2ffd9e2fc34cfd651fe8595556686f82bf711fd5575fd65d1404da8efd63709b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 01:26:51.240043 kubelet[2715]: E1213 01:26:51.240003 2715 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"00ee66f4-a276-4315-8517-eae981e857e4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2ffd9e2fc34cfd651fe8595556686f82bf711fd5575fd65d1404da8efd63709b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-pfwc5" podUID="00ee66f4-a276-4315-8517-eae981e857e4" Dec 13 01:26:51.240499 containerd[1528]: time="2024-12-13T01:26:51.240462335Z" level=error msg="StopPodSandbox for \"55bd0d64065eb3ffbbc5417c3405e781a368ce3e7beb30f5da484159c4e1f037\" failed" error="failed to destroy network for sandbox \"55bd0d64065eb3ffbbc5417c3405e781a368ce3e7beb30f5da484159c4e1f037\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:26:51.240678 kubelet[2715]: E1213 01:26:51.240651 2715 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"55bd0d64065eb3ffbbc5417c3405e781a368ce3e7beb30f5da484159c4e1f037\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" podSandboxID="55bd0d64065eb3ffbbc5417c3405e781a368ce3e7beb30f5da484159c4e1f037" Dec 13 01:26:51.240780 kubelet[2715]: E1213 01:26:51.240688 2715 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"55bd0d64065eb3ffbbc5417c3405e781a368ce3e7beb30f5da484159c4e1f037"} Dec 13 01:26:51.240780 kubelet[2715]: E1213 01:26:51.240720 2715 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f2e87485-5f8e-4164-9df5-1329c4f71d1a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"55bd0d64065eb3ffbbc5417c3405e781a368ce3e7beb30f5da484159c4e1f037\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 01:26:51.240780 kubelet[2715]: E1213 01:26:51.240745 2715 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f2e87485-5f8e-4164-9df5-1329c4f71d1a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"55bd0d64065eb3ffbbc5417c3405e781a368ce3e7beb30f5da484159c4e1f037\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-84b598996-gt9q4" podUID="f2e87485-5f8e-4164-9df5-1329c4f71d1a" Dec 13 01:26:51.250629 containerd[1528]: time="2024-12-13T01:26:51.249894395Z" level=error msg="StopPodSandbox for \"7c991000aab5483990110654a983a36bc6b436881b5dd8ec7e317d4129a4a4c2\" failed" error="failed to destroy network for sandbox \"7c991000aab5483990110654a983a36bc6b436881b5dd8ec7e317d4129a4a4c2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:26:51.250732 kubelet[2715]: E1213 01:26:51.250124 2715 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7c991000aab5483990110654a983a36bc6b436881b5dd8ec7e317d4129a4a4c2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7c991000aab5483990110654a983a36bc6b436881b5dd8ec7e317d4129a4a4c2" Dec 13 01:26:51.250732 kubelet[2715]: E1213 01:26:51.250158 2715 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"7c991000aab5483990110654a983a36bc6b436881b5dd8ec7e317d4129a4a4c2"} Dec 13 01:26:51.250732 kubelet[2715]: E1213 01:26:51.250193 2715 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"931767d7-5830-4f3b-991c-c63e121572c9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7c991000aab5483990110654a983a36bc6b436881b5dd8ec7e317d4129a4a4c2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 01:26:51.250732 kubelet[2715]: E1213 01:26:51.250219 2715 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for 
\"931767d7-5830-4f3b-991c-c63e121572c9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7c991000aab5483990110654a983a36bc6b436881b5dd8ec7e317d4129a4a4c2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-l5x25" podUID="931767d7-5830-4f3b-991c-c63e121572c9" Dec 13 01:26:51.252326 containerd[1528]: time="2024-12-13T01:26:51.252290404Z" level=error msg="StopPodSandbox for \"d5c865eef5abf820b54edd80942497b241a2ccc708c307fea4f92854f55789e1\" failed" error="failed to destroy network for sandbox \"d5c865eef5abf820b54edd80942497b241a2ccc708c307fea4f92854f55789e1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:26:51.252551 kubelet[2715]: E1213 01:26:51.252503 2715 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d5c865eef5abf820b54edd80942497b241a2ccc708c307fea4f92854f55789e1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d5c865eef5abf820b54edd80942497b241a2ccc708c307fea4f92854f55789e1" Dec 13 01:26:51.252601 kubelet[2715]: E1213 01:26:51.252562 2715 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d5c865eef5abf820b54edd80942497b241a2ccc708c307fea4f92854f55789e1"} Dec 13 01:26:51.252601 kubelet[2715]: E1213 01:26:51.252594 2715 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f1e8aa9b-f9a7-4786-81cc-8faa5931a2c7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d5c865eef5abf820b54edd80942497b241a2ccc708c307fea4f92854f55789e1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 01:26:51.252697 kubelet[2715]: E1213 01:26:51.252662 2715 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f1e8aa9b-f9a7-4786-81cc-8faa5931a2c7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d5c865eef5abf820b54edd80942497b241a2ccc708c307fea4f92854f55789e1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-mmdpd" podUID="f1e8aa9b-f9a7-4786-81cc-8faa5931a2c7" Dec 13 01:26:51.254266 containerd[1528]: time="2024-12-13T01:26:51.254233173Z" level=error msg="StopPodSandbox for \"af1b3e13272174c75b138bb491fa40bb3981ebca13b40bca3a72da90dcbbd750\" failed" error="failed to destroy network for sandbox \"af1b3e13272174c75b138bb491fa40bb3981ebca13b40bca3a72da90dcbbd750\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:26:51.254404 kubelet[2715]: E1213 01:26:51.254383 2715 remote_runtime.go:222] "StopPodSandbox from runtime 
service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"af1b3e13272174c75b138bb491fa40bb3981ebca13b40bca3a72da90dcbbd750\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="af1b3e13272174c75b138bb491fa40bb3981ebca13b40bca3a72da90dcbbd750" Dec 13 01:26:51.254441 kubelet[2715]: E1213 01:26:51.254414 2715 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"af1b3e13272174c75b138bb491fa40bb3981ebca13b40bca3a72da90dcbbd750"} Dec 13 01:26:51.254463 kubelet[2715]: E1213 01:26:51.254444 2715 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ebfbe69b-4807-4a90-8634-a91c3bc497ca\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"af1b3e13272174c75b138bb491fa40bb3981ebca13b40bca3a72da90dcbbd750\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 01:26:51.254502 kubelet[2715]: E1213 01:26:51.254469 2715 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ebfbe69b-4807-4a90-8634-a91c3bc497ca\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"af1b3e13272174c75b138bb491fa40bb3981ebca13b40bca3a72da90dcbbd750\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-b7d896755-dlt9b" podUID="ebfbe69b-4807-4a90-8634-a91c3bc497ca" Dec 13 01:26:51.258116 containerd[1528]: time="2024-12-13T01:26:51.258078508Z" level=error msg="StopPodSandbox for \"5cba27d8ad78d9dfa7a3f814db920f0e37e2e358586b98f8235c0c959aa27d36\" failed" error="failed to destroy network for sandbox \"5cba27d8ad78d9dfa7a3f814db920f0e37e2e358586b98f8235c0c959aa27d36\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:26:51.258377 kubelet[2715]: E1213 01:26:51.258359 2715 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5cba27d8ad78d9dfa7a3f814db920f0e37e2e358586b98f8235c0c959aa27d36\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5cba27d8ad78d9dfa7a3f814db920f0e37e2e358586b98f8235c0c959aa27d36" Dec 13 01:26:51.258444 kubelet[2715]: E1213 01:26:51.258391 2715 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"5cba27d8ad78d9dfa7a3f814db920f0e37e2e358586b98f8235c0c959aa27d36"} Dec 13 01:26:51.258444 kubelet[2715]: E1213 01:26:51.258421 2715 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f0cd5edf-7433-4815-a273-dae6acb01eb3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5cba27d8ad78d9dfa7a3f814db920f0e37e2e358586b98f8235c0c959aa27d36\\\": plugin type=\\\"calico\\\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 01:26:51.258506 kubelet[2715]: E1213 01:26:51.258447 2715 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f0cd5edf-7433-4815-a273-dae6acb01eb3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5cba27d8ad78d9dfa7a3f814db920f0e37e2e358586b98f8235c0c959aa27d36\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-b7d896755-5fn2z" podUID="f0cd5edf-7433-4815-a273-dae6acb01eb3" Dec 13 01:26:52.843895 systemd[1]: Started sshd@8-10.0.0.44:22-10.0.0.1:34128.service - OpenSSH per-connection server daemon (10.0.0.1:34128). Dec 13 01:26:52.884826 sshd[3843]: Accepted publickey for core from 10.0.0.1 port 34128 ssh2: RSA SHA256:yVKhZEHbC7ylZ7bY3Y8pwdh1t/xp6Vz/y3yLFfd9j+Q Dec 13 01:26:52.886437 sshd[3843]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:26:52.891795 systemd-logind[1510]: New session 9 of user core. Dec 13 01:26:52.902023 systemd[1]: Started session-9.scope - Session 9 of User core. Dec 13 01:26:53.026629 sshd[3843]: pam_unix(sshd:session): session closed for user core Dec 13 01:26:53.031923 systemd[1]: sshd@8-10.0.0.44:22-10.0.0.1:34128.service: Deactivated successfully. Dec 13 01:26:53.035171 systemd[1]: session-9.scope: Deactivated successfully. Dec 13 01:26:53.037120 systemd-logind[1510]: Session 9 logged out. Waiting for processes to exit. Dec 13 01:26:53.038156 systemd-logind[1510]: Removed session 9. Dec 13 01:26:54.691698 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount304623498.mount: Deactivated successfully. 
Dec 13 01:26:54.848504 containerd[1528]: time="2024-12-13T01:26:54.848437313Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:26:54.853231 containerd[1528]: time="2024-12-13T01:26:54.853174652Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=137671762" Dec 13 01:26:54.859931 containerd[1528]: time="2024-12-13T01:26:54.859874188Z" level=info msg="ImageCreate event name:\"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:26:54.861237 containerd[1528]: time="2024-12-13T01:26:54.860857667Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:26:54.862077 containerd[1528]: time="2024-12-13T01:26:54.862001198Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"137671624\" in 4.657832985s" Dec 13 01:26:54.862077 containerd[1528]: time="2024-12-13T01:26:54.862032001Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\"" Dec 13 01:26:54.895616 containerd[1528]: time="2024-12-13T01:26:54.892894071Z" level=info msg="CreateContainer within sandbox \"2e7375594888a6cfa646533f05df9bc147cece5d87ad11a5067d1f73efe658d6\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Dec 13 01:26:54.919577 containerd[1528]: time="2024-12-13T01:26:54.919528523Z" level=info msg="CreateContainer within sandbox \"2e7375594888a6cfa646533f05df9bc147cece5d87ad11a5067d1f73efe658d6\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"ed3e49ca16c8e5fe9333f8bd6dce1a882670f69d4af192734dffd0c0a7328b2f\"" Dec 13 01:26:54.921492 containerd[1528]: time="2024-12-13T01:26:54.921403273Z" level=info msg="StartContainer for \"ed3e49ca16c8e5fe9333f8bd6dce1a882670f69d4af192734dffd0c0a7328b2f\"" Dec 13 01:26:55.010457 containerd[1528]: time="2024-12-13T01:26:55.010417901Z" level=info msg="StartContainer for \"ed3e49ca16c8e5fe9333f8bd6dce1a882670f69d4af192734dffd0c0a7328b2f\" returns successfully" Dec 13 01:26:55.195629 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Dec 13 01:26:55.195735 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
Dec 13 01:26:55.212497 kubelet[2715]: E1213 01:26:55.212004 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:26:55.229636 kubelet[2715]: I1213 01:26:55.229590 2715 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-46588" podStartSLOduration=1.631381629 podStartE2EDuration="14.229551034s" podCreationTimestamp="2024-12-13 01:26:41 +0000 UTC" firstStartedPulling="2024-12-13 01:26:42.264152299 +0000 UTC m=+24.299967328" lastFinishedPulling="2024-12-13 01:26:54.862321704 +0000 UTC m=+36.898136733" observedRunningTime="2024-12-13 01:26:55.22897939 +0000 UTC m=+37.264794419" watchObservedRunningTime="2024-12-13 01:26:55.229551034 +0000 UTC m=+37.265366103" Dec 13 01:26:56.213631 kubelet[2715]: E1213 01:26:56.213568 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:26:56.667645 kernel: bpftool[4098]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Dec 13 01:26:56.810695 systemd-networkd[1220]: vxlan.calico: Link UP Dec 13 01:26:56.810705 systemd-networkd[1220]: vxlan.calico: Gained carrier Dec 13 01:26:58.044858 systemd[1]: Started sshd@9-10.0.0.44:22-10.0.0.1:34130.service - OpenSSH per-connection server daemon (10.0.0.1:34130). Dec 13 01:26:58.083871 sshd[4170]: Accepted publickey for core from 10.0.0.1 port 34130 ssh2: RSA SHA256:yVKhZEHbC7ylZ7bY3Y8pwdh1t/xp6Vz/y3yLFfd9j+Q Dec 13 01:26:58.085472 sshd[4170]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:26:58.089940 systemd-logind[1510]: New session 10 of user core. Dec 13 01:26:58.094017 systemd[1]: Started session-10.scope - Session 10 of User core. Dec 13 01:26:58.219334 sshd[4170]: pam_unix(sshd:session): session closed for user core Dec 13 01:26:58.226937 systemd[1]: Started sshd@10-10.0.0.44:22-10.0.0.1:34136.service - OpenSSH per-connection server daemon (10.0.0.1:34136). Dec 13 01:26:58.227359 systemd[1]: sshd@9-10.0.0.44:22-10.0.0.1:34130.service: Deactivated successfully. Dec 13 01:26:58.228996 systemd[1]: session-10.scope: Deactivated successfully. Dec 13 01:26:58.230504 systemd-logind[1510]: Session 10 logged out. Waiting for processes to exit. Dec 13 01:26:58.232508 systemd-logind[1510]: Removed session 10. Dec 13 01:26:58.262230 sshd[4184]: Accepted publickey for core from 10.0.0.1 port 34136 ssh2: RSA SHA256:yVKhZEHbC7ylZ7bY3Y8pwdh1t/xp6Vz/y3yLFfd9j+Q Dec 13 01:26:58.264984 sshd[4184]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:26:58.270408 systemd-logind[1510]: New session 11 of user core. Dec 13 01:26:58.276205 systemd[1]: Started session-11.scope - Session 11 of User core. Dec 13 01:26:58.338750 systemd-networkd[1220]: vxlan.calico: Gained IPv6LL Dec 13 01:26:58.442333 sshd[4184]: pam_unix(sshd:session): session closed for user core Dec 13 01:26:58.449916 systemd[1]: Started sshd@11-10.0.0.44:22-10.0.0.1:34152.service - OpenSSH per-connection server daemon (10.0.0.1:34152). Dec 13 01:26:58.450283 systemd[1]: sshd@10-10.0.0.44:22-10.0.0.1:34136.service: Deactivated successfully. Dec 13 01:26:58.456899 systemd[1]: session-11.scope: Deactivated successfully. Dec 13 01:26:58.458063 systemd-logind[1510]: Session 11 logged out. Waiting for processes to exit. 
Dec 13 01:26:58.464989 systemd-logind[1510]: Removed session 11. Dec 13 01:26:58.493790 sshd[4201]: Accepted publickey for core from 10.0.0.1 port 34152 ssh2: RSA SHA256:yVKhZEHbC7ylZ7bY3Y8pwdh1t/xp6Vz/y3yLFfd9j+Q Dec 13 01:26:58.495213 sshd[4201]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:26:58.500797 systemd-logind[1510]: New session 12 of user core. Dec 13 01:26:58.510905 systemd[1]: Started session-12.scope - Session 12 of User core. Dec 13 01:26:58.623244 sshd[4201]: pam_unix(sshd:session): session closed for user core Dec 13 01:26:58.626494 systemd[1]: sshd@11-10.0.0.44:22-10.0.0.1:34152.service: Deactivated successfully. Dec 13 01:26:58.628423 systemd-logind[1510]: Session 12 logged out. Waiting for processes to exit. Dec 13 01:26:58.628490 systemd[1]: session-12.scope: Deactivated successfully. Dec 13 01:26:58.629495 systemd-logind[1510]: Removed session 12. Dec 13 01:27:02.054996 containerd[1528]: time="2024-12-13T01:27:02.054932104Z" level=info msg="StopPodSandbox for \"2ffd9e2fc34cfd651fe8595556686f82bf711fd5575fd65d1404da8efd63709b\"" Dec 13 01:27:02.232209 containerd[1528]: 2024-12-13 01:27:02.129 [INFO][4245] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="2ffd9e2fc34cfd651fe8595556686f82bf711fd5575fd65d1404da8efd63709b" Dec 13 01:27:02.232209 containerd[1528]: 2024-12-13 01:27:02.130 [INFO][4245] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="2ffd9e2fc34cfd651fe8595556686f82bf711fd5575fd65d1404da8efd63709b" iface="eth0" netns="/var/run/netns/cni-df56294b-4e60-8f90-2586-912f2a930b8a" Dec 13 01:27:02.232209 containerd[1528]: 2024-12-13 01:27:02.132 [INFO][4245] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="2ffd9e2fc34cfd651fe8595556686f82bf711fd5575fd65d1404da8efd63709b" iface="eth0" netns="/var/run/netns/cni-df56294b-4e60-8f90-2586-912f2a930b8a" Dec 13 01:27:02.232209 containerd[1528]: 2024-12-13 01:27:02.135 [INFO][4245] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="2ffd9e2fc34cfd651fe8595556686f82bf711fd5575fd65d1404da8efd63709b" iface="eth0" netns="/var/run/netns/cni-df56294b-4e60-8f90-2586-912f2a930b8a" Dec 13 01:27:02.232209 containerd[1528]: 2024-12-13 01:27:02.135 [INFO][4245] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="2ffd9e2fc34cfd651fe8595556686f82bf711fd5575fd65d1404da8efd63709b" Dec 13 01:27:02.232209 containerd[1528]: 2024-12-13 01:27:02.135 [INFO][4245] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2ffd9e2fc34cfd651fe8595556686f82bf711fd5575fd65d1404da8efd63709b" Dec 13 01:27:02.232209 containerd[1528]: 2024-12-13 01:27:02.214 [INFO][4252] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2ffd9e2fc34cfd651fe8595556686f82bf711fd5575fd65d1404da8efd63709b" HandleID="k8s-pod-network.2ffd9e2fc34cfd651fe8595556686f82bf711fd5575fd65d1404da8efd63709b" Workload="localhost-k8s-coredns--76f75df574--pfwc5-eth0" Dec 13 01:27:02.232209 containerd[1528]: 2024-12-13 01:27:02.214 [INFO][4252] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:27:02.232209 containerd[1528]: 2024-12-13 01:27:02.214 [INFO][4252] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:27:02.232209 containerd[1528]: 2024-12-13 01:27:02.227 [WARNING][4252] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2ffd9e2fc34cfd651fe8595556686f82bf711fd5575fd65d1404da8efd63709b" HandleID="k8s-pod-network.2ffd9e2fc34cfd651fe8595556686f82bf711fd5575fd65d1404da8efd63709b" Workload="localhost-k8s-coredns--76f75df574--pfwc5-eth0" Dec 13 01:27:02.232209 containerd[1528]: 2024-12-13 01:27:02.227 [INFO][4252] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2ffd9e2fc34cfd651fe8595556686f82bf711fd5575fd65d1404da8efd63709b" HandleID="k8s-pod-network.2ffd9e2fc34cfd651fe8595556686f82bf711fd5575fd65d1404da8efd63709b" Workload="localhost-k8s-coredns--76f75df574--pfwc5-eth0" Dec 13 01:27:02.232209 containerd[1528]: 2024-12-13 01:27:02.228 [INFO][4252] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:27:02.232209 containerd[1528]: 2024-12-13 01:27:02.230 [INFO][4245] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="2ffd9e2fc34cfd651fe8595556686f82bf711fd5575fd65d1404da8efd63709b" Dec 13 01:27:02.233585 containerd[1528]: time="2024-12-13T01:27:02.232372132Z" level=info msg="TearDown network for sandbox \"2ffd9e2fc34cfd651fe8595556686f82bf711fd5575fd65d1404da8efd63709b\" successfully" Dec 13 01:27:02.233585 containerd[1528]: time="2024-12-13T01:27:02.232402374Z" level=info msg="StopPodSandbox for \"2ffd9e2fc34cfd651fe8595556686f82bf711fd5575fd65d1404da8efd63709b\" returns successfully" Dec 13 01:27:02.233837 kubelet[2715]: E1213 01:27:02.232757 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:27:02.234128 containerd[1528]: time="2024-12-13T01:27:02.233509008Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-pfwc5,Uid:00ee66f4-a276-4315-8517-eae981e857e4,Namespace:kube-system,Attempt:1,}" Dec 13 01:27:02.235006 systemd[1]: run-netns-cni\x2ddf56294b\x2d4e60\x2d8f90\x2d2586\x2d912f2a930b8a.mount: Deactivated successfully. 
Dec 13 01:27:02.346823 systemd-networkd[1220]: calibc1c7924b93: Link UP Dec 13 01:27:02.347652 systemd-networkd[1220]: calibc1c7924b93: Gained carrier Dec 13 01:27:02.362354 containerd[1528]: 2024-12-13 01:27:02.280 [INFO][4261] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--76f75df574--pfwc5-eth0 coredns-76f75df574- kube-system 00ee66f4-a276-4315-8517-eae981e857e4 924 0 2024-12-13 01:26:34 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-76f75df574-pfwc5 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calibc1c7924b93 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="5aade9f454a3491352bfd74fbffb03c6639c581e4a35bed23361f11212385114" Namespace="kube-system" Pod="coredns-76f75df574-pfwc5" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--pfwc5-" Dec 13 01:27:02.362354 containerd[1528]: 2024-12-13 01:27:02.280 [INFO][4261] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="5aade9f454a3491352bfd74fbffb03c6639c581e4a35bed23361f11212385114" Namespace="kube-system" Pod="coredns-76f75df574-pfwc5" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--pfwc5-eth0" Dec 13 01:27:02.362354 containerd[1528]: 2024-12-13 01:27:02.306 [INFO][4274] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5aade9f454a3491352bfd74fbffb03c6639c581e4a35bed23361f11212385114" HandleID="k8s-pod-network.5aade9f454a3491352bfd74fbffb03c6639c581e4a35bed23361f11212385114" Workload="localhost-k8s-coredns--76f75df574--pfwc5-eth0" Dec 13 01:27:02.362354 containerd[1528]: 2024-12-13 01:27:02.317 [INFO][4274] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5aade9f454a3491352bfd74fbffb03c6639c581e4a35bed23361f11212385114" HandleID="k8s-pod-network.5aade9f454a3491352bfd74fbffb03c6639c581e4a35bed23361f11212385114" Workload="localhost-k8s-coredns--76f75df574--pfwc5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002dafb0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-76f75df574-pfwc5", "timestamp":"2024-12-13 01:27:02.306333758 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 01:27:02.362354 containerd[1528]: 2024-12-13 01:27:02.317 [INFO][4274] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:27:02.362354 containerd[1528]: 2024-12-13 01:27:02.317 [INFO][4274] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 01:27:02.362354 containerd[1528]: 2024-12-13 01:27:02.317 [INFO][4274] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 13 01:27:02.362354 containerd[1528]: 2024-12-13 01:27:02.318 [INFO][4274] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.5aade9f454a3491352bfd74fbffb03c6639c581e4a35bed23361f11212385114" host="localhost" Dec 13 01:27:02.362354 containerd[1528]: 2024-12-13 01:27:02.324 [INFO][4274] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Dec 13 01:27:02.362354 containerd[1528]: 2024-12-13 01:27:02.328 [INFO][4274] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Dec 13 01:27:02.362354 containerd[1528]: 2024-12-13 01:27:02.330 [INFO][4274] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 13 01:27:02.362354 containerd[1528]: 2024-12-13 01:27:02.332 [INFO][4274] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 13 01:27:02.362354 containerd[1528]: 2024-12-13 01:27:02.332 [INFO][4274] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.5aade9f454a3491352bfd74fbffb03c6639c581e4a35bed23361f11212385114" host="localhost" Dec 13 01:27:02.362354 containerd[1528]: 2024-12-13 01:27:02.333 [INFO][4274] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.5aade9f454a3491352bfd74fbffb03c6639c581e4a35bed23361f11212385114 Dec 13 01:27:02.362354 containerd[1528]: 2024-12-13 01:27:02.336 [INFO][4274] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.5aade9f454a3491352bfd74fbffb03c6639c581e4a35bed23361f11212385114" host="localhost" Dec 13 01:27:02.362354 containerd[1528]: 2024-12-13 01:27:02.341 [INFO][4274] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.5aade9f454a3491352bfd74fbffb03c6639c581e4a35bed23361f11212385114" host="localhost" Dec 13 01:27:02.362354 containerd[1528]: 2024-12-13 01:27:02.341 [INFO][4274] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.5aade9f454a3491352bfd74fbffb03c6639c581e4a35bed23361f11212385114" host="localhost" Dec 13 01:27:02.362354 containerd[1528]: 2024-12-13 01:27:02.341 [INFO][4274] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Dec 13 01:27:02.362354 containerd[1528]: 2024-12-13 01:27:02.341 [INFO][4274] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="5aade9f454a3491352bfd74fbffb03c6639c581e4a35bed23361f11212385114" HandleID="k8s-pod-network.5aade9f454a3491352bfd74fbffb03c6639c581e4a35bed23361f11212385114" Workload="localhost-k8s-coredns--76f75df574--pfwc5-eth0" Dec 13 01:27:02.363113 containerd[1528]: 2024-12-13 01:27:02.344 [INFO][4261] cni-plugin/k8s.go 386: Populated endpoint ContainerID="5aade9f454a3491352bfd74fbffb03c6639c581e4a35bed23361f11212385114" Namespace="kube-system" Pod="coredns-76f75df574-pfwc5" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--pfwc5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--pfwc5-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"00ee66f4-a276-4315-8517-eae981e857e4", ResourceVersion:"924", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 26, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-76f75df574-pfwc5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibc1c7924b93", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:27:02.363113 containerd[1528]: 2024-12-13 01:27:02.344 [INFO][4261] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="5aade9f454a3491352bfd74fbffb03c6639c581e4a35bed23361f11212385114" Namespace="kube-system" Pod="coredns-76f75df574-pfwc5" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--pfwc5-eth0" Dec 13 01:27:02.363113 containerd[1528]: 2024-12-13 01:27:02.344 [INFO][4261] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibc1c7924b93 ContainerID="5aade9f454a3491352bfd74fbffb03c6639c581e4a35bed23361f11212385114" Namespace="kube-system" Pod="coredns-76f75df574-pfwc5" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--pfwc5-eth0" Dec 13 01:27:02.363113 containerd[1528]: 2024-12-13 01:27:02.346 [INFO][4261] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5aade9f454a3491352bfd74fbffb03c6639c581e4a35bed23361f11212385114" Namespace="kube-system" Pod="coredns-76f75df574-pfwc5" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--pfwc5-eth0" Dec 13 01:27:02.363113 containerd[1528]: 2024-12-13 01:27:02.347 
[INFO][4261] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="5aade9f454a3491352bfd74fbffb03c6639c581e4a35bed23361f11212385114" Namespace="kube-system" Pod="coredns-76f75df574-pfwc5" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--pfwc5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--pfwc5-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"00ee66f4-a276-4315-8517-eae981e857e4", ResourceVersion:"924", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 26, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5aade9f454a3491352bfd74fbffb03c6639c581e4a35bed23361f11212385114", Pod:"coredns-76f75df574-pfwc5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibc1c7924b93", MAC:"56:ca:02:24:e7:00", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:27:02.363113 containerd[1528]: 2024-12-13 01:27:02.357 [INFO][4261] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="5aade9f454a3491352bfd74fbffb03c6639c581e4a35bed23361f11212385114" Namespace="kube-system" Pod="coredns-76f75df574-pfwc5" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--pfwc5-eth0" Dec 13 01:27:02.377756 containerd[1528]: time="2024-12-13T01:27:02.377639007Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:27:02.377756 containerd[1528]: time="2024-12-13T01:27:02.377709172Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:27:02.377996 containerd[1528]: time="2024-12-13T01:27:02.377732214Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:27:02.378220 containerd[1528]: time="2024-12-13T01:27:02.378181764Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:27:02.403488 systemd-resolved[1424]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 01:27:02.421244 containerd[1528]: time="2024-12-13T01:27:02.421209002Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-pfwc5,Uid:00ee66f4-a276-4315-8517-eae981e857e4,Namespace:kube-system,Attempt:1,} returns sandbox id \"5aade9f454a3491352bfd74fbffb03c6639c581e4a35bed23361f11212385114\"" Dec 13 01:27:02.422038 kubelet[2715]: E1213 01:27:02.422014 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:27:02.425230 containerd[1528]: time="2024-12-13T01:27:02.425134504Z" level=info msg="CreateContainer within sandbox \"5aade9f454a3491352bfd74fbffb03c6639c581e4a35bed23361f11212385114\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 01:27:02.441127 containerd[1528]: time="2024-12-13T01:27:02.441082891Z" level=info msg="CreateContainer within sandbox \"5aade9f454a3491352bfd74fbffb03c6639c581e4a35bed23361f11212385114\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a37a6624cdc1903cf422395d6e27b8ea479d18f1c600c239f7d2556b80e443b8\"" Dec 13 01:27:02.442364 containerd[1528]: time="2024-12-13T01:27:02.441632127Z" level=info msg="StartContainer for \"a37a6624cdc1903cf422395d6e27b8ea479d18f1c600c239f7d2556b80e443b8\"" Dec 13 01:27:02.484146 containerd[1528]: time="2024-12-13T01:27:02.484021123Z" level=info msg="StartContainer for \"a37a6624cdc1903cf422395d6e27b8ea479d18f1c600c239f7d2556b80e443b8\" returns successfully" Dec 13 01:27:03.054505 containerd[1528]: time="2024-12-13T01:27:03.054257676Z" level=info msg="StopPodSandbox for \"af1b3e13272174c75b138bb491fa40bb3981ebca13b40bca3a72da90dcbbd750\"" Dec 13 01:27:03.054745 containerd[1528]: time="2024-12-13T01:27:03.054714146Z" level=info msg="StopPodSandbox for \"d5c865eef5abf820b54edd80942497b241a2ccc708c307fea4f92854f55789e1\"" Dec 13 01:27:03.139220 containerd[1528]: 2024-12-13 01:27:03.104 [INFO][4409] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="af1b3e13272174c75b138bb491fa40bb3981ebca13b40bca3a72da90dcbbd750" Dec 13 01:27:03.139220 containerd[1528]: 2024-12-13 01:27:03.104 [INFO][4409] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="af1b3e13272174c75b138bb491fa40bb3981ebca13b40bca3a72da90dcbbd750" iface="eth0" netns="/var/run/netns/cni-07caf637-3f01-97ff-7fcb-3a92850c2eab" Dec 13 01:27:03.139220 containerd[1528]: 2024-12-13 01:27:03.104 [INFO][4409] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="af1b3e13272174c75b138bb491fa40bb3981ebca13b40bca3a72da90dcbbd750" iface="eth0" netns="/var/run/netns/cni-07caf637-3f01-97ff-7fcb-3a92850c2eab" Dec 13 01:27:03.139220 containerd[1528]: 2024-12-13 01:27:03.104 [INFO][4409] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="af1b3e13272174c75b138bb491fa40bb3981ebca13b40bca3a72da90dcbbd750" iface="eth0" netns="/var/run/netns/cni-07caf637-3f01-97ff-7fcb-3a92850c2eab" Dec 13 01:27:03.139220 containerd[1528]: 2024-12-13 01:27:03.104 [INFO][4409] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="af1b3e13272174c75b138bb491fa40bb3981ebca13b40bca3a72da90dcbbd750" Dec 13 01:27:03.139220 containerd[1528]: 2024-12-13 01:27:03.104 [INFO][4409] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="af1b3e13272174c75b138bb491fa40bb3981ebca13b40bca3a72da90dcbbd750" Dec 13 01:27:03.139220 containerd[1528]: 2024-12-13 01:27:03.125 [INFO][4426] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="af1b3e13272174c75b138bb491fa40bb3981ebca13b40bca3a72da90dcbbd750" HandleID="k8s-pod-network.af1b3e13272174c75b138bb491fa40bb3981ebca13b40bca3a72da90dcbbd750" Workload="localhost-k8s-calico--apiserver--b7d896755--dlt9b-eth0" Dec 13 01:27:03.139220 containerd[1528]: 2024-12-13 01:27:03.125 [INFO][4426] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:27:03.139220 containerd[1528]: 2024-12-13 01:27:03.125 [INFO][4426] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:27:03.139220 containerd[1528]: 2024-12-13 01:27:03.134 [WARNING][4426] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="af1b3e13272174c75b138bb491fa40bb3981ebca13b40bca3a72da90dcbbd750" HandleID="k8s-pod-network.af1b3e13272174c75b138bb491fa40bb3981ebca13b40bca3a72da90dcbbd750" Workload="localhost-k8s-calico--apiserver--b7d896755--dlt9b-eth0" Dec 13 01:27:03.139220 containerd[1528]: 2024-12-13 01:27:03.134 [INFO][4426] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="af1b3e13272174c75b138bb491fa40bb3981ebca13b40bca3a72da90dcbbd750" HandleID="k8s-pod-network.af1b3e13272174c75b138bb491fa40bb3981ebca13b40bca3a72da90dcbbd750" Workload="localhost-k8s-calico--apiserver--b7d896755--dlt9b-eth0" Dec 13 01:27:03.139220 containerd[1528]: 2024-12-13 01:27:03.136 [INFO][4426] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:27:03.139220 containerd[1528]: 2024-12-13 01:27:03.137 [INFO][4409] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="af1b3e13272174c75b138bb491fa40bb3981ebca13b40bca3a72da90dcbbd750" Dec 13 01:27:03.139220 containerd[1528]: time="2024-12-13T01:27:03.139080006Z" level=info msg="TearDown network for sandbox \"af1b3e13272174c75b138bb491fa40bb3981ebca13b40bca3a72da90dcbbd750\" successfully" Dec 13 01:27:03.139220 containerd[1528]: time="2024-12-13T01:27:03.139107048Z" level=info msg="StopPodSandbox for \"af1b3e13272174c75b138bb491fa40bb3981ebca13b40bca3a72da90dcbbd750\" returns successfully" Dec 13 01:27:03.140486 containerd[1528]: time="2024-12-13T01:27:03.140456296Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b7d896755-dlt9b,Uid:ebfbe69b-4807-4a90-8634-a91c3bc497ca,Namespace:calico-apiserver,Attempt:1,}" Dec 13 01:27:03.149694 containerd[1528]: 2024-12-13 01:27:03.102 [INFO][4410] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d5c865eef5abf820b54edd80942497b241a2ccc708c307fea4f92854f55789e1" Dec 13 01:27:03.149694 containerd[1528]: 2024-12-13 01:27:03.102 [INFO][4410] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="d5c865eef5abf820b54edd80942497b241a2ccc708c307fea4f92854f55789e1" iface="eth0" netns="/var/run/netns/cni-3c9eb014-1202-3300-b51e-91e4039a0d82" Dec 13 01:27:03.149694 containerd[1528]: 2024-12-13 01:27:03.104 [INFO][4410] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d5c865eef5abf820b54edd80942497b241a2ccc708c307fea4f92854f55789e1" iface="eth0" netns="/var/run/netns/cni-3c9eb014-1202-3300-b51e-91e4039a0d82" Dec 13 01:27:03.149694 containerd[1528]: 2024-12-13 01:27:03.104 [INFO][4410] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="d5c865eef5abf820b54edd80942497b241a2ccc708c307fea4f92854f55789e1" iface="eth0" netns="/var/run/netns/cni-3c9eb014-1202-3300-b51e-91e4039a0d82" Dec 13 01:27:03.149694 containerd[1528]: 2024-12-13 01:27:03.104 [INFO][4410] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d5c865eef5abf820b54edd80942497b241a2ccc708c307fea4f92854f55789e1" Dec 13 01:27:03.149694 containerd[1528]: 2024-12-13 01:27:03.104 [INFO][4410] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d5c865eef5abf820b54edd80942497b241a2ccc708c307fea4f92854f55789e1" Dec 13 01:27:03.149694 containerd[1528]: 2024-12-13 01:27:03.130 [INFO][4425] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d5c865eef5abf820b54edd80942497b241a2ccc708c307fea4f92854f55789e1" HandleID="k8s-pod-network.d5c865eef5abf820b54edd80942497b241a2ccc708c307fea4f92854f55789e1" Workload="localhost-k8s-csi--node--driver--mmdpd-eth0" Dec 13 01:27:03.149694 containerd[1528]: 2024-12-13 01:27:03.130 [INFO][4425] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:27:03.149694 containerd[1528]: 2024-12-13 01:27:03.136 [INFO][4425] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:27:03.149694 containerd[1528]: 2024-12-13 01:27:03.144 [WARNING][4425] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d5c865eef5abf820b54edd80942497b241a2ccc708c307fea4f92854f55789e1" HandleID="k8s-pod-network.d5c865eef5abf820b54edd80942497b241a2ccc708c307fea4f92854f55789e1" Workload="localhost-k8s-csi--node--driver--mmdpd-eth0" Dec 13 01:27:03.149694 containerd[1528]: 2024-12-13 01:27:03.144 [INFO][4425] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d5c865eef5abf820b54edd80942497b241a2ccc708c307fea4f92854f55789e1" HandleID="k8s-pod-network.d5c865eef5abf820b54edd80942497b241a2ccc708c307fea4f92854f55789e1" Workload="localhost-k8s-csi--node--driver--mmdpd-eth0" Dec 13 01:27:03.149694 containerd[1528]: 2024-12-13 01:27:03.145 [INFO][4425] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:27:03.149694 containerd[1528]: 2024-12-13 01:27:03.147 [INFO][4410] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="d5c865eef5abf820b54edd80942497b241a2ccc708c307fea4f92854f55789e1" Dec 13 01:27:03.150338 containerd[1528]: time="2024-12-13T01:27:03.149802830Z" level=info msg="TearDown network for sandbox \"d5c865eef5abf820b54edd80942497b241a2ccc708c307fea4f92854f55789e1\" successfully" Dec 13 01:27:03.150338 containerd[1528]: time="2024-12-13T01:27:03.149821671Z" level=info msg="StopPodSandbox for \"d5c865eef5abf820b54edd80942497b241a2ccc708c307fea4f92854f55789e1\" returns successfully" Dec 13 01:27:03.150338 containerd[1528]: time="2024-12-13T01:27:03.150322184Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mmdpd,Uid:f1e8aa9b-f9a7-4786-81cc-8faa5931a2c7,Namespace:calico-system,Attempt:1,}" Dec 13 01:27:03.231816 kubelet[2715]: E1213 01:27:03.231017 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:27:03.239387 systemd[1]: run-netns-cni\x2d3c9eb014\x2d1202\x2d3300\x2db51e\x2d91e4039a0d82.mount: Deactivated successfully. Dec 13 01:27:03.239533 systemd[1]: run-netns-cni\x2d07caf637\x2d3f01\x2d97ff\x2d7fcb\x2d3a92850c2eab.mount: Deactivated successfully. Dec 13 01:27:03.248093 kubelet[2715]: I1213 01:27:03.247806 2715 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-pfwc5" podStartSLOduration=29.247736501 podStartE2EDuration="29.247736501s" podCreationTimestamp="2024-12-13 01:26:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:27:03.247042215 +0000 UTC m=+45.282857244" watchObservedRunningTime="2024-12-13 01:27:03.247736501 +0000 UTC m=+45.283551530" Dec 13 01:27:03.293859 systemd-networkd[1220]: calid58fa563bad: Link UP Dec 13 01:27:03.294387 systemd-networkd[1220]: calid58fa563bad: Gained carrier Dec 13 01:27:03.311155 containerd[1528]: 2024-12-13 01:27:03.197 [INFO][4440] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--b7d896755--dlt9b-eth0 calico-apiserver-b7d896755- calico-apiserver ebfbe69b-4807-4a90-8634-a91c3bc497ca 940 0 2024-12-13 01:26:40 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:b7d896755 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-b7d896755-dlt9b eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calid58fa563bad [] []}} ContainerID="4a9b3b57fb0202842cd5ebcccf29c3e28ed35425939549d6fe6949e3bcb67e92" Namespace="calico-apiserver" Pod="calico-apiserver-b7d896755-dlt9b" WorkloadEndpoint="localhost-k8s-calico--apiserver--b7d896755--dlt9b-" Dec 13 01:27:03.311155 containerd[1528]: 2024-12-13 01:27:03.197 [INFO][4440] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="4a9b3b57fb0202842cd5ebcccf29c3e28ed35425939549d6fe6949e3bcb67e92" Namespace="calico-apiserver" Pod="calico-apiserver-b7d896755-dlt9b" WorkloadEndpoint="localhost-k8s-calico--apiserver--b7d896755--dlt9b-eth0" Dec 13 01:27:03.311155 containerd[1528]: 2024-12-13 01:27:03.229 [INFO][4467] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4a9b3b57fb0202842cd5ebcccf29c3e28ed35425939549d6fe6949e3bcb67e92" 
HandleID="k8s-pod-network.4a9b3b57fb0202842cd5ebcccf29c3e28ed35425939549d6fe6949e3bcb67e92" Workload="localhost-k8s-calico--apiserver--b7d896755--dlt9b-eth0" Dec 13 01:27:03.311155 containerd[1528]: 2024-12-13 01:27:03.253 [INFO][4467] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4a9b3b57fb0202842cd5ebcccf29c3e28ed35425939549d6fe6949e3bcb67e92" HandleID="k8s-pod-network.4a9b3b57fb0202842cd5ebcccf29c3e28ed35425939549d6fe6949e3bcb67e92" Workload="localhost-k8s-calico--apiserver--b7d896755--dlt9b-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400052a530), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-b7d896755-dlt9b", "timestamp":"2024-12-13 01:27:03.229157721 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 01:27:03.311155 containerd[1528]: 2024-12-13 01:27:03.253 [INFO][4467] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:27:03.311155 containerd[1528]: 2024-12-13 01:27:03.253 [INFO][4467] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:27:03.311155 containerd[1528]: 2024-12-13 01:27:03.254 [INFO][4467] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 13 01:27:03.311155 containerd[1528]: 2024-12-13 01:27:03.256 [INFO][4467] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.4a9b3b57fb0202842cd5ebcccf29c3e28ed35425939549d6fe6949e3bcb67e92" host="localhost" Dec 13 01:27:03.311155 containerd[1528]: 2024-12-13 01:27:03.262 [INFO][4467] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Dec 13 01:27:03.311155 containerd[1528]: 2024-12-13 01:27:03.269 [INFO][4467] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Dec 13 01:27:03.311155 containerd[1528]: 2024-12-13 01:27:03.273 [INFO][4467] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 13 01:27:03.311155 containerd[1528]: 2024-12-13 01:27:03.276 [INFO][4467] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 13 01:27:03.311155 containerd[1528]: 2024-12-13 01:27:03.276 [INFO][4467] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.4a9b3b57fb0202842cd5ebcccf29c3e28ed35425939549d6fe6949e3bcb67e92" host="localhost" Dec 13 01:27:03.311155 containerd[1528]: 2024-12-13 01:27:03.277 [INFO][4467] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.4a9b3b57fb0202842cd5ebcccf29c3e28ed35425939549d6fe6949e3bcb67e92 Dec 13 01:27:03.311155 containerd[1528]: 2024-12-13 01:27:03.280 [INFO][4467] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.4a9b3b57fb0202842cd5ebcccf29c3e28ed35425939549d6fe6949e3bcb67e92" host="localhost" Dec 13 01:27:03.311155 containerd[1528]: 2024-12-13 01:27:03.285 [INFO][4467] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.4a9b3b57fb0202842cd5ebcccf29c3e28ed35425939549d6fe6949e3bcb67e92" host="localhost" Dec 13 01:27:03.311155 containerd[1528]: 2024-12-13 01:27:03.285 [INFO][4467] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] 
handle="k8s-pod-network.4a9b3b57fb0202842cd5ebcccf29c3e28ed35425939549d6fe6949e3bcb67e92" host="localhost" Dec 13 01:27:03.311155 containerd[1528]: 2024-12-13 01:27:03.285 [INFO][4467] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:27:03.311155 containerd[1528]: 2024-12-13 01:27:03.285 [INFO][4467] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="4a9b3b57fb0202842cd5ebcccf29c3e28ed35425939549d6fe6949e3bcb67e92" HandleID="k8s-pod-network.4a9b3b57fb0202842cd5ebcccf29c3e28ed35425939549d6fe6949e3bcb67e92" Workload="localhost-k8s-calico--apiserver--b7d896755--dlt9b-eth0" Dec 13 01:27:03.313187 containerd[1528]: 2024-12-13 01:27:03.291 [INFO][4440] cni-plugin/k8s.go 386: Populated endpoint ContainerID="4a9b3b57fb0202842cd5ebcccf29c3e28ed35425939549d6fe6949e3bcb67e92" Namespace="calico-apiserver" Pod="calico-apiserver-b7d896755-dlt9b" WorkloadEndpoint="localhost-k8s-calico--apiserver--b7d896755--dlt9b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--b7d896755--dlt9b-eth0", GenerateName:"calico-apiserver-b7d896755-", Namespace:"calico-apiserver", SelfLink:"", UID:"ebfbe69b-4807-4a90-8634-a91c3bc497ca", ResourceVersion:"940", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 26, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"b7d896755", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-b7d896755-dlt9b", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid58fa563bad", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:27:03.313187 containerd[1528]: 2024-12-13 01:27:03.292 [INFO][4440] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="4a9b3b57fb0202842cd5ebcccf29c3e28ed35425939549d6fe6949e3bcb67e92" Namespace="calico-apiserver" Pod="calico-apiserver-b7d896755-dlt9b" WorkloadEndpoint="localhost-k8s-calico--apiserver--b7d896755--dlt9b-eth0" Dec 13 01:27:03.313187 containerd[1528]: 2024-12-13 01:27:03.292 [INFO][4440] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid58fa563bad ContainerID="4a9b3b57fb0202842cd5ebcccf29c3e28ed35425939549d6fe6949e3bcb67e92" Namespace="calico-apiserver" Pod="calico-apiserver-b7d896755-dlt9b" WorkloadEndpoint="localhost-k8s-calico--apiserver--b7d896755--dlt9b-eth0" Dec 13 01:27:03.313187 containerd[1528]: 2024-12-13 01:27:03.294 [INFO][4440] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4a9b3b57fb0202842cd5ebcccf29c3e28ed35425939549d6fe6949e3bcb67e92" Namespace="calico-apiserver" Pod="calico-apiserver-b7d896755-dlt9b" WorkloadEndpoint="localhost-k8s-calico--apiserver--b7d896755--dlt9b-eth0" Dec 
13 01:27:03.313187 containerd[1528]: 2024-12-13 01:27:03.295 [INFO][4440] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="4a9b3b57fb0202842cd5ebcccf29c3e28ed35425939549d6fe6949e3bcb67e92" Namespace="calico-apiserver" Pod="calico-apiserver-b7d896755-dlt9b" WorkloadEndpoint="localhost-k8s-calico--apiserver--b7d896755--dlt9b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--b7d896755--dlt9b-eth0", GenerateName:"calico-apiserver-b7d896755-", Namespace:"calico-apiserver", SelfLink:"", UID:"ebfbe69b-4807-4a90-8634-a91c3bc497ca", ResourceVersion:"940", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 26, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"b7d896755", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4a9b3b57fb0202842cd5ebcccf29c3e28ed35425939549d6fe6949e3bcb67e92", Pod:"calico-apiserver-b7d896755-dlt9b", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid58fa563bad", MAC:"fe:e0:fd:c0:f9:50", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:27:03.313187 containerd[1528]: 2024-12-13 01:27:03.305 [INFO][4440] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="4a9b3b57fb0202842cd5ebcccf29c3e28ed35425939549d6fe6949e3bcb67e92" Namespace="calico-apiserver" Pod="calico-apiserver-b7d896755-dlt9b" WorkloadEndpoint="localhost-k8s-calico--apiserver--b7d896755--dlt9b-eth0" Dec 13 01:27:03.331317 systemd-networkd[1220]: cali2410ad19c3a: Link UP Dec 13 01:27:03.332424 systemd-networkd[1220]: cali2410ad19c3a: Gained carrier Dec 13 01:27:03.339207 containerd[1528]: time="2024-12-13T01:27:03.338687113Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:27:03.339207 containerd[1528]: time="2024-12-13T01:27:03.339139383Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:27:03.339207 containerd[1528]: time="2024-12-13T01:27:03.339178306Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:27:03.339425 containerd[1528]: time="2024-12-13T01:27:03.339340316Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:27:03.349652 containerd[1528]: 2024-12-13 01:27:03.200 [INFO][4450] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--mmdpd-eth0 csi-node-driver- calico-system f1e8aa9b-f9a7-4786-81cc-8faa5931a2c7 939 0 2024-12-13 01:26:42 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:55b695c467 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-mmdpd eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali2410ad19c3a [] []}} ContainerID="39caa24729eb787c764b1b4c9e50d5ee8bfb2c36882cd6669410aebeb8590c5c" Namespace="calico-system" Pod="csi-node-driver-mmdpd" WorkloadEndpoint="localhost-k8s-csi--node--driver--mmdpd-" Dec 13 01:27:03.349652 containerd[1528]: 2024-12-13 01:27:03.201 [INFO][4450] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="39caa24729eb787c764b1b4c9e50d5ee8bfb2c36882cd6669410aebeb8590c5c" Namespace="calico-system" Pod="csi-node-driver-mmdpd" WorkloadEndpoint="localhost-k8s-csi--node--driver--mmdpd-eth0" Dec 13 01:27:03.349652 containerd[1528]: 2024-12-13 01:27:03.233 [INFO][4472] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="39caa24729eb787c764b1b4c9e50d5ee8bfb2c36882cd6669410aebeb8590c5c" HandleID="k8s-pod-network.39caa24729eb787c764b1b4c9e50d5ee8bfb2c36882cd6669410aebeb8590c5c" Workload="localhost-k8s-csi--node--driver--mmdpd-eth0" Dec 13 01:27:03.349652 containerd[1528]: 2024-12-13 01:27:03.257 [INFO][4472] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="39caa24729eb787c764b1b4c9e50d5ee8bfb2c36882cd6669410aebeb8590c5c" HandleID="k8s-pod-network.39caa24729eb787c764b1b4c9e50d5ee8bfb2c36882cd6669410aebeb8590c5c" Workload="localhost-k8s-csi--node--driver--mmdpd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400031e490), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-mmdpd", "timestamp":"2024-12-13 01:27:03.233055257 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 01:27:03.349652 containerd[1528]: 2024-12-13 01:27:03.258 [INFO][4472] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:27:03.349652 containerd[1528]: 2024-12-13 01:27:03.287 [INFO][4472] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 01:27:03.349652 containerd[1528]: 2024-12-13 01:27:03.287 [INFO][4472] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 13 01:27:03.349652 containerd[1528]: 2024-12-13 01:27:03.289 [INFO][4472] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.39caa24729eb787c764b1b4c9e50d5ee8bfb2c36882cd6669410aebeb8590c5c" host="localhost" Dec 13 01:27:03.349652 containerd[1528]: 2024-12-13 01:27:03.295 [INFO][4472] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Dec 13 01:27:03.349652 containerd[1528]: 2024-12-13 01:27:03.299 [INFO][4472] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Dec 13 01:27:03.349652 containerd[1528]: 2024-12-13 01:27:03.304 [INFO][4472] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 13 01:27:03.349652 containerd[1528]: 2024-12-13 01:27:03.309 [INFO][4472] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 13 01:27:03.349652 containerd[1528]: 2024-12-13 01:27:03.309 [INFO][4472] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.39caa24729eb787c764b1b4c9e50d5ee8bfb2c36882cd6669410aebeb8590c5c" host="localhost" Dec 13 01:27:03.349652 containerd[1528]: 2024-12-13 01:27:03.312 [INFO][4472] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.39caa24729eb787c764b1b4c9e50d5ee8bfb2c36882cd6669410aebeb8590c5c Dec 13 01:27:03.349652 containerd[1528]: 2024-12-13 01:27:03.318 [INFO][4472] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.39caa24729eb787c764b1b4c9e50d5ee8bfb2c36882cd6669410aebeb8590c5c" host="localhost" Dec 13 01:27:03.349652 containerd[1528]: 2024-12-13 01:27:03.323 [INFO][4472] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.39caa24729eb787c764b1b4c9e50d5ee8bfb2c36882cd6669410aebeb8590c5c" host="localhost" Dec 13 01:27:03.349652 containerd[1528]: 2024-12-13 01:27:03.323 [INFO][4472] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.39caa24729eb787c764b1b4c9e50d5ee8bfb2c36882cd6669410aebeb8590c5c" host="localhost" Dec 13 01:27:03.349652 containerd[1528]: 2024-12-13 01:27:03.324 [INFO][4472] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Dec 13 01:27:03.349652 containerd[1528]: 2024-12-13 01:27:03.324 [INFO][4472] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="39caa24729eb787c764b1b4c9e50d5ee8bfb2c36882cd6669410aebeb8590c5c" HandleID="k8s-pod-network.39caa24729eb787c764b1b4c9e50d5ee8bfb2c36882cd6669410aebeb8590c5c" Workload="localhost-k8s-csi--node--driver--mmdpd-eth0" Dec 13 01:27:03.350166 containerd[1528]: 2024-12-13 01:27:03.327 [INFO][4450] cni-plugin/k8s.go 386: Populated endpoint ContainerID="39caa24729eb787c764b1b4c9e50d5ee8bfb2c36882cd6669410aebeb8590c5c" Namespace="calico-system" Pod="csi-node-driver-mmdpd" WorkloadEndpoint="localhost-k8s-csi--node--driver--mmdpd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--mmdpd-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f1e8aa9b-f9a7-4786-81cc-8faa5931a2c7", ResourceVersion:"939", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 26, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-mmdpd", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali2410ad19c3a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:27:03.350166 containerd[1528]: 2024-12-13 01:27:03.327 [INFO][4450] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="39caa24729eb787c764b1b4c9e50d5ee8bfb2c36882cd6669410aebeb8590c5c" Namespace="calico-system" Pod="csi-node-driver-mmdpd" WorkloadEndpoint="localhost-k8s-csi--node--driver--mmdpd-eth0" Dec 13 01:27:03.350166 containerd[1528]: 2024-12-13 01:27:03.327 [INFO][4450] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2410ad19c3a ContainerID="39caa24729eb787c764b1b4c9e50d5ee8bfb2c36882cd6669410aebeb8590c5c" Namespace="calico-system" Pod="csi-node-driver-mmdpd" WorkloadEndpoint="localhost-k8s-csi--node--driver--mmdpd-eth0" Dec 13 01:27:03.350166 containerd[1528]: 2024-12-13 01:27:03.331 [INFO][4450] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="39caa24729eb787c764b1b4c9e50d5ee8bfb2c36882cd6669410aebeb8590c5c" Namespace="calico-system" Pod="csi-node-driver-mmdpd" WorkloadEndpoint="localhost-k8s-csi--node--driver--mmdpd-eth0" Dec 13 01:27:03.350166 containerd[1528]: 2024-12-13 01:27:03.333 [INFO][4450] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="39caa24729eb787c764b1b4c9e50d5ee8bfb2c36882cd6669410aebeb8590c5c" Namespace="calico-system" Pod="csi-node-driver-mmdpd" WorkloadEndpoint="localhost-k8s-csi--node--driver--mmdpd-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--mmdpd-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f1e8aa9b-f9a7-4786-81cc-8faa5931a2c7", ResourceVersion:"939", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 26, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"39caa24729eb787c764b1b4c9e50d5ee8bfb2c36882cd6669410aebeb8590c5c", Pod:"csi-node-driver-mmdpd", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali2410ad19c3a", MAC:"d2:b5:ea:f2:e0:6d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:27:03.350166 containerd[1528]: 2024-12-13 01:27:03.341 [INFO][4450] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="39caa24729eb787c764b1b4c9e50d5ee8bfb2c36882cd6669410aebeb8590c5c" Namespace="calico-system" Pod="csi-node-driver-mmdpd" WorkloadEndpoint="localhost-k8s-csi--node--driver--mmdpd-eth0" Dec 13 01:27:03.364238 systemd-resolved[1424]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 01:27:03.368034 containerd[1528]: time="2024-12-13T01:27:03.367814306Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:27:03.368034 containerd[1528]: time="2024-12-13T01:27:03.367878030Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:27:03.368034 containerd[1528]: time="2024-12-13T01:27:03.367889791Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:27:03.368034 containerd[1528]: time="2024-12-13T01:27:03.367980477Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:27:03.387763 systemd-resolved[1424]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 01:27:03.394814 systemd-networkd[1220]: calibc1c7924b93: Gained IPv6LL Dec 13 01:27:03.398650 containerd[1528]: time="2024-12-13T01:27:03.398583166Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b7d896755-dlt9b,Uid:ebfbe69b-4807-4a90-8634-a91c3bc497ca,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"4a9b3b57fb0202842cd5ebcccf29c3e28ed35425939549d6fe6949e3bcb67e92\"" Dec 13 01:27:03.400174 containerd[1528]: time="2024-12-13T01:27:03.400140709Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Dec 13 01:27:03.403693 containerd[1528]: time="2024-12-13T01:27:03.403663260Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mmdpd,Uid:f1e8aa9b-f9a7-4786-81cc-8faa5931a2c7,Namespace:calico-system,Attempt:1,} returns sandbox id \"39caa24729eb787c764b1b4c9e50d5ee8bfb2c36882cd6669410aebeb8590c5c\"" Dec 13 01:27:03.639879 systemd[1]: Started sshd@12-10.0.0.44:22-10.0.0.1:56402.service - OpenSSH per-connection server daemon (10.0.0.1:56402). Dec 13 01:27:03.681756 sshd[4597]: Accepted publickey for core from 10.0.0.1 port 56402 ssh2: RSA SHA256:yVKhZEHbC7ylZ7bY3Y8pwdh1t/xp6Vz/y3yLFfd9j+Q Dec 13 01:27:03.683222 sshd[4597]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:27:03.687579 systemd-logind[1510]: New session 13 of user core. Dec 13 01:27:03.693880 systemd[1]: Started session-13.scope - Session 13 of User core. Dec 13 01:27:03.827835 sshd[4597]: pam_unix(sshd:session): session closed for user core Dec 13 01:27:03.837017 systemd[1]: Started sshd@13-10.0.0.44:22-10.0.0.1:56410.service - OpenSSH per-connection server daemon (10.0.0.1:56410). Dec 13 01:27:03.837496 systemd[1]: sshd@12-10.0.0.44:22-10.0.0.1:56402.service: Deactivated successfully. Dec 13 01:27:03.840057 systemd[1]: session-13.scope: Deactivated successfully. Dec 13 01:27:03.842036 systemd-logind[1510]: Session 13 logged out. Waiting for processes to exit. Dec 13 01:27:03.843338 systemd-logind[1510]: Removed session 13. Dec 13 01:27:03.871841 sshd[4609]: Accepted publickey for core from 10.0.0.1 port 56410 ssh2: RSA SHA256:yVKhZEHbC7ylZ7bY3Y8pwdh1t/xp6Vz/y3yLFfd9j+Q Dec 13 01:27:03.873193 sshd[4609]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:27:03.877500 systemd-logind[1510]: New session 14 of user core. Dec 13 01:27:03.883921 systemd[1]: Started session-14.scope - Session 14 of User core. Dec 13 01:27:04.057253 containerd[1528]: time="2024-12-13T01:27:04.056919892Z" level=info msg="StopPodSandbox for \"5cba27d8ad78d9dfa7a3f814db920f0e37e2e358586b98f8235c0c959aa27d36\"" Dec 13 01:27:04.057569 containerd[1528]: time="2024-12-13T01:27:04.057516010Z" level=info msg="StopPodSandbox for \"55bd0d64065eb3ffbbc5417c3405e781a368ce3e7beb30f5da484159c4e1f037\"" Dec 13 01:27:04.118343 sshd[4609]: pam_unix(sshd:session): session closed for user core Dec 13 01:27:04.126866 systemd[1]: Started sshd@14-10.0.0.44:22-10.0.0.1:56426.service - OpenSSH per-connection server daemon (10.0.0.1:56426). Dec 13 01:27:04.127244 systemd[1]: sshd@13-10.0.0.44:22-10.0.0.1:56410.service: Deactivated successfully. Dec 13 01:27:04.132540 systemd[1]: session-14.scope: Deactivated successfully. Dec 13 01:27:04.134869 systemd-logind[1510]: Session 14 logged out. Waiting for processes to exit. 
Dec 13 01:27:04.137643 systemd-logind[1510]: Removed session 14. Dec 13 01:27:04.164077 containerd[1528]: 2024-12-13 01:27:04.112 [INFO][4655] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="5cba27d8ad78d9dfa7a3f814db920f0e37e2e358586b98f8235c0c959aa27d36" Dec 13 01:27:04.164077 containerd[1528]: 2024-12-13 01:27:04.113 [INFO][4655] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="5cba27d8ad78d9dfa7a3f814db920f0e37e2e358586b98f8235c0c959aa27d36" iface="eth0" netns="/var/run/netns/cni-fb5b17b1-ccb1-6183-e8fe-96cc34f4d0ec" Dec 13 01:27:04.164077 containerd[1528]: 2024-12-13 01:27:04.113 [INFO][4655] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="5cba27d8ad78d9dfa7a3f814db920f0e37e2e358586b98f8235c0c959aa27d36" iface="eth0" netns="/var/run/netns/cni-fb5b17b1-ccb1-6183-e8fe-96cc34f4d0ec" Dec 13 01:27:04.164077 containerd[1528]: 2024-12-13 01:27:04.115 [INFO][4655] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="5cba27d8ad78d9dfa7a3f814db920f0e37e2e358586b98f8235c0c959aa27d36" iface="eth0" netns="/var/run/netns/cni-fb5b17b1-ccb1-6183-e8fe-96cc34f4d0ec" Dec 13 01:27:04.164077 containerd[1528]: 2024-12-13 01:27:04.115 [INFO][4655] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="5cba27d8ad78d9dfa7a3f814db920f0e37e2e358586b98f8235c0c959aa27d36" Dec 13 01:27:04.164077 containerd[1528]: 2024-12-13 01:27:04.115 [INFO][4655] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5cba27d8ad78d9dfa7a3f814db920f0e37e2e358586b98f8235c0c959aa27d36" Dec 13 01:27:04.164077 containerd[1528]: 2024-12-13 01:27:04.143 [INFO][4669] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5cba27d8ad78d9dfa7a3f814db920f0e37e2e358586b98f8235c0c959aa27d36" HandleID="k8s-pod-network.5cba27d8ad78d9dfa7a3f814db920f0e37e2e358586b98f8235c0c959aa27d36" Workload="localhost-k8s-calico--apiserver--b7d896755--5fn2z-eth0" Dec 13 01:27:04.164077 containerd[1528]: 2024-12-13 01:27:04.143 [INFO][4669] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:27:04.164077 containerd[1528]: 2024-12-13 01:27:04.143 [INFO][4669] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:27:04.164077 containerd[1528]: 2024-12-13 01:27:04.155 [WARNING][4669] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="5cba27d8ad78d9dfa7a3f814db920f0e37e2e358586b98f8235c0c959aa27d36" HandleID="k8s-pod-network.5cba27d8ad78d9dfa7a3f814db920f0e37e2e358586b98f8235c0c959aa27d36" Workload="localhost-k8s-calico--apiserver--b7d896755--5fn2z-eth0" Dec 13 01:27:04.164077 containerd[1528]: 2024-12-13 01:27:04.155 [INFO][4669] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5cba27d8ad78d9dfa7a3f814db920f0e37e2e358586b98f8235c0c959aa27d36" HandleID="k8s-pod-network.5cba27d8ad78d9dfa7a3f814db920f0e37e2e358586b98f8235c0c959aa27d36" Workload="localhost-k8s-calico--apiserver--b7d896755--5fn2z-eth0" Dec 13 01:27:04.164077 containerd[1528]: 2024-12-13 01:27:04.157 [INFO][4669] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:27:04.164077 containerd[1528]: 2024-12-13 01:27:04.160 [INFO][4655] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="5cba27d8ad78d9dfa7a3f814db920f0e37e2e358586b98f8235c0c959aa27d36" Dec 13 01:27:04.164920 containerd[1528]: time="2024-12-13T01:27:04.164878938Z" level=info msg="TearDown network for sandbox \"5cba27d8ad78d9dfa7a3f814db920f0e37e2e358586b98f8235c0c959aa27d36\" successfully" Dec 13 01:27:04.164967 containerd[1528]: time="2024-12-13T01:27:04.164917220Z" level=info msg="StopPodSandbox for \"5cba27d8ad78d9dfa7a3f814db920f0e37e2e358586b98f8235c0c959aa27d36\" returns successfully" Dec 13 01:27:04.165918 containerd[1528]: time="2024-12-13T01:27:04.165885763Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b7d896755-5fn2z,Uid:f0cd5edf-7433-4815-a273-dae6acb01eb3,Namespace:calico-apiserver,Attempt:1,}" Dec 13 01:27:04.172218 containerd[1528]: 2024-12-13 01:27:04.125 [INFO][4654] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="55bd0d64065eb3ffbbc5417c3405e781a368ce3e7beb30f5da484159c4e1f037" Dec 13 01:27:04.172218 containerd[1528]: 2024-12-13 01:27:04.125 [INFO][4654] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="55bd0d64065eb3ffbbc5417c3405e781a368ce3e7beb30f5da484159c4e1f037" iface="eth0" netns="/var/run/netns/cni-4e22dac7-e74f-48e4-c885-f773041921a0" Dec 13 01:27:04.172218 containerd[1528]: 2024-12-13 01:27:04.126 [INFO][4654] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="55bd0d64065eb3ffbbc5417c3405e781a368ce3e7beb30f5da484159c4e1f037" iface="eth0" netns="/var/run/netns/cni-4e22dac7-e74f-48e4-c885-f773041921a0" Dec 13 01:27:04.172218 containerd[1528]: 2024-12-13 01:27:04.126 [INFO][4654] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="55bd0d64065eb3ffbbc5417c3405e781a368ce3e7beb30f5da484159c4e1f037" iface="eth0" netns="/var/run/netns/cni-4e22dac7-e74f-48e4-c885-f773041921a0" Dec 13 01:27:04.172218 containerd[1528]: 2024-12-13 01:27:04.126 [INFO][4654] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="55bd0d64065eb3ffbbc5417c3405e781a368ce3e7beb30f5da484159c4e1f037" Dec 13 01:27:04.172218 containerd[1528]: 2024-12-13 01:27:04.126 [INFO][4654] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="55bd0d64065eb3ffbbc5417c3405e781a368ce3e7beb30f5da484159c4e1f037" Dec 13 01:27:04.172218 containerd[1528]: 2024-12-13 01:27:04.156 [INFO][4678] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="55bd0d64065eb3ffbbc5417c3405e781a368ce3e7beb30f5da484159c4e1f037" HandleID="k8s-pod-network.55bd0d64065eb3ffbbc5417c3405e781a368ce3e7beb30f5da484159c4e1f037" Workload="localhost-k8s-calico--kube--controllers--84b598996--gt9q4-eth0" Dec 13 01:27:04.172218 containerd[1528]: 2024-12-13 01:27:04.156 [INFO][4678] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:27:04.172218 containerd[1528]: 2024-12-13 01:27:04.157 [INFO][4678] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:27:04.172218 containerd[1528]: 2024-12-13 01:27:04.166 [WARNING][4678] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="55bd0d64065eb3ffbbc5417c3405e781a368ce3e7beb30f5da484159c4e1f037" HandleID="k8s-pod-network.55bd0d64065eb3ffbbc5417c3405e781a368ce3e7beb30f5da484159c4e1f037" Workload="localhost-k8s-calico--kube--controllers--84b598996--gt9q4-eth0" Dec 13 01:27:04.172218 containerd[1528]: 2024-12-13 01:27:04.166 [INFO][4678] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="55bd0d64065eb3ffbbc5417c3405e781a368ce3e7beb30f5da484159c4e1f037" HandleID="k8s-pod-network.55bd0d64065eb3ffbbc5417c3405e781a368ce3e7beb30f5da484159c4e1f037" Workload="localhost-k8s-calico--kube--controllers--84b598996--gt9q4-eth0" Dec 13 01:27:04.172218 containerd[1528]: 2024-12-13 01:27:04.168 [INFO][4678] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:27:04.172218 containerd[1528]: 2024-12-13 01:27:04.170 [INFO][4654] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="55bd0d64065eb3ffbbc5417c3405e781a368ce3e7beb30f5da484159c4e1f037" Dec 13 01:27:04.172552 containerd[1528]: time="2024-12-13T01:27:04.172323178Z" level=info msg="TearDown network for sandbox \"55bd0d64065eb3ffbbc5417c3405e781a368ce3e7beb30f5da484159c4e1f037\" successfully" Dec 13 01:27:04.172552 containerd[1528]: time="2024-12-13T01:27:04.172343379Z" level=info msg="StopPodSandbox for \"55bd0d64065eb3ffbbc5417c3405e781a368ce3e7beb30f5da484159c4e1f037\" returns successfully" Dec 13 01:27:04.173149 containerd[1528]: time="2024-12-13T01:27:04.173125670Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-84b598996-gt9q4,Uid:f2e87485-5f8e-4164-9df5-1329c4f71d1a,Namespace:calico-system,Attempt:1,}" Dec 13 01:27:04.176143 sshd[4674]: Accepted publickey for core from 10.0.0.1 port 56426 ssh2: RSA SHA256:yVKhZEHbC7ylZ7bY3Y8pwdh1t/xp6Vz/y3yLFfd9j+Q Dec 13 01:27:04.177870 sshd[4674]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:27:04.184123 systemd-logind[1510]: New session 15 of user core. Dec 13 01:27:04.188892 systemd[1]: Started session-15.scope - Session 15 of User core. Dec 13 01:27:04.238043 systemd[1]: run-netns-cni\x2dfb5b17b1\x2dccb1\x2d6183\x2de8fe\x2d96cc34f4d0ec.mount: Deactivated successfully. Dec 13 01:27:04.239024 systemd[1]: run-netns-cni\x2d4e22dac7\x2de74f\x2d48e4\x2dc885\x2df773041921a0.mount: Deactivated successfully. 
Dec 13 01:27:04.239757 kubelet[2715]: E1213 01:27:04.239498 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:27:04.318179 systemd-networkd[1220]: cali715f0c0cbc0: Link UP Dec 13 01:27:04.322157 systemd-networkd[1220]: cali715f0c0cbc0: Gained carrier Dec 13 01:27:04.338482 containerd[1528]: 2024-12-13 01:27:04.222 [INFO][4700] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--84b598996--gt9q4-eth0 calico-kube-controllers-84b598996- calico-system f2e87485-5f8e-4164-9df5-1329c4f71d1a 963 0 2024-12-13 01:26:42 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:84b598996 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-84b598996-gt9q4 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali715f0c0cbc0 [] []}} ContainerID="72a5e150efc6611b9da198cdd1c5c0f4f0a2c1f07d9d2351d8fb466ea8dabd68" Namespace="calico-system" Pod="calico-kube-controllers-84b598996-gt9q4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--84b598996--gt9q4-" Dec 13 01:27:04.338482 containerd[1528]: 2024-12-13 01:27:04.222 [INFO][4700] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="72a5e150efc6611b9da198cdd1c5c0f4f0a2c1f07d9d2351d8fb466ea8dabd68" Namespace="calico-system" Pod="calico-kube-controllers-84b598996-gt9q4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--84b598996--gt9q4-eth0" Dec 13 01:27:04.338482 containerd[1528]: 2024-12-13 01:27:04.255 [INFO][4722] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="72a5e150efc6611b9da198cdd1c5c0f4f0a2c1f07d9d2351d8fb466ea8dabd68" HandleID="k8s-pod-network.72a5e150efc6611b9da198cdd1c5c0f4f0a2c1f07d9d2351d8fb466ea8dabd68" Workload="localhost-k8s-calico--kube--controllers--84b598996--gt9q4-eth0" Dec 13 01:27:04.338482 containerd[1528]: 2024-12-13 01:27:04.270 [INFO][4722] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="72a5e150efc6611b9da198cdd1c5c0f4f0a2c1f07d9d2351d8fb466ea8dabd68" HandleID="k8s-pod-network.72a5e150efc6611b9da198cdd1c5c0f4f0a2c1f07d9d2351d8fb466ea8dabd68" Workload="localhost-k8s-calico--kube--controllers--84b598996--gt9q4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003ab080), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-84b598996-gt9q4", "timestamp":"2024-12-13 01:27:04.255958934 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 01:27:04.338482 containerd[1528]: 2024-12-13 01:27:04.270 [INFO][4722] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:27:04.338482 containerd[1528]: 2024-12-13 01:27:04.270 [INFO][4722] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 01:27:04.338482 containerd[1528]: 2024-12-13 01:27:04.270 [INFO][4722] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 13 01:27:04.338482 containerd[1528]: 2024-12-13 01:27:04.271 [INFO][4722] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.72a5e150efc6611b9da198cdd1c5c0f4f0a2c1f07d9d2351d8fb466ea8dabd68" host="localhost" Dec 13 01:27:04.338482 containerd[1528]: 2024-12-13 01:27:04.276 [INFO][4722] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Dec 13 01:27:04.338482 containerd[1528]: 2024-12-13 01:27:04.283 [INFO][4722] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Dec 13 01:27:04.338482 containerd[1528]: 2024-12-13 01:27:04.285 [INFO][4722] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 13 01:27:04.338482 containerd[1528]: 2024-12-13 01:27:04.287 [INFO][4722] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 13 01:27:04.338482 containerd[1528]: 2024-12-13 01:27:04.287 [INFO][4722] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.72a5e150efc6611b9da198cdd1c5c0f4f0a2c1f07d9d2351d8fb466ea8dabd68" host="localhost" Dec 13 01:27:04.338482 containerd[1528]: 2024-12-13 01:27:04.289 [INFO][4722] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.72a5e150efc6611b9da198cdd1c5c0f4f0a2c1f07d9d2351d8fb466ea8dabd68 Dec 13 01:27:04.338482 containerd[1528]: 2024-12-13 01:27:04.293 [INFO][4722] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.72a5e150efc6611b9da198cdd1c5c0f4f0a2c1f07d9d2351d8fb466ea8dabd68" host="localhost" Dec 13 01:27:04.338482 containerd[1528]: 2024-12-13 01:27:04.301 [INFO][4722] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.72a5e150efc6611b9da198cdd1c5c0f4f0a2c1f07d9d2351d8fb466ea8dabd68" host="localhost" Dec 13 01:27:04.338482 containerd[1528]: 2024-12-13 01:27:04.302 [INFO][4722] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.72a5e150efc6611b9da198cdd1c5c0f4f0a2c1f07d9d2351d8fb466ea8dabd68" host="localhost" Dec 13 01:27:04.338482 containerd[1528]: 2024-12-13 01:27:04.302 [INFO][4722] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Dec 13 01:27:04.338482 containerd[1528]: 2024-12-13 01:27:04.302 [INFO][4722] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="72a5e150efc6611b9da198cdd1c5c0f4f0a2c1f07d9d2351d8fb466ea8dabd68" HandleID="k8s-pod-network.72a5e150efc6611b9da198cdd1c5c0f4f0a2c1f07d9d2351d8fb466ea8dabd68" Workload="localhost-k8s-calico--kube--controllers--84b598996--gt9q4-eth0" Dec 13 01:27:04.339345 containerd[1528]: 2024-12-13 01:27:04.305 [INFO][4700] cni-plugin/k8s.go 386: Populated endpoint ContainerID="72a5e150efc6611b9da198cdd1c5c0f4f0a2c1f07d9d2351d8fb466ea8dabd68" Namespace="calico-system" Pod="calico-kube-controllers-84b598996-gt9q4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--84b598996--gt9q4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--84b598996--gt9q4-eth0", GenerateName:"calico-kube-controllers-84b598996-", Namespace:"calico-system", SelfLink:"", UID:"f2e87485-5f8e-4164-9df5-1329c4f71d1a", ResourceVersion:"963", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 26, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"84b598996", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-84b598996-gt9q4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali715f0c0cbc0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:27:04.339345 containerd[1528]: 2024-12-13 01:27:04.305 [INFO][4700] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="72a5e150efc6611b9da198cdd1c5c0f4f0a2c1f07d9d2351d8fb466ea8dabd68" Namespace="calico-system" Pod="calico-kube-controllers-84b598996-gt9q4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--84b598996--gt9q4-eth0" Dec 13 01:27:04.339345 containerd[1528]: 2024-12-13 01:27:04.305 [INFO][4700] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali715f0c0cbc0 ContainerID="72a5e150efc6611b9da198cdd1c5c0f4f0a2c1f07d9d2351d8fb466ea8dabd68" Namespace="calico-system" Pod="calico-kube-controllers-84b598996-gt9q4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--84b598996--gt9q4-eth0" Dec 13 01:27:04.339345 containerd[1528]: 2024-12-13 01:27:04.322 [INFO][4700] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="72a5e150efc6611b9da198cdd1c5c0f4f0a2c1f07d9d2351d8fb466ea8dabd68" Namespace="calico-system" Pod="calico-kube-controllers-84b598996-gt9q4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--84b598996--gt9q4-eth0" Dec 13 01:27:04.339345 containerd[1528]: 2024-12-13 01:27:04.323 [INFO][4700] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to 
endpoint ContainerID="72a5e150efc6611b9da198cdd1c5c0f4f0a2c1f07d9d2351d8fb466ea8dabd68" Namespace="calico-system" Pod="calico-kube-controllers-84b598996-gt9q4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--84b598996--gt9q4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--84b598996--gt9q4-eth0", GenerateName:"calico-kube-controllers-84b598996-", Namespace:"calico-system", SelfLink:"", UID:"f2e87485-5f8e-4164-9df5-1329c4f71d1a", ResourceVersion:"963", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 26, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"84b598996", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"72a5e150efc6611b9da198cdd1c5c0f4f0a2c1f07d9d2351d8fb466ea8dabd68", Pod:"calico-kube-controllers-84b598996-gt9q4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali715f0c0cbc0", MAC:"6e:75:a9:06:bc:fa", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:27:04.339345 containerd[1528]: 2024-12-13 01:27:04.333 [INFO][4700] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="72a5e150efc6611b9da198cdd1c5c0f4f0a2c1f07d9d2351d8fb466ea8dabd68" Namespace="calico-system" Pod="calico-kube-controllers-84b598996-gt9q4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--84b598996--gt9q4-eth0" Dec 13 01:27:04.356398 systemd-networkd[1220]: cali07225598f4a: Link UP Dec 13 01:27:04.356780 systemd-networkd[1220]: cali07225598f4a: Gained carrier Dec 13 01:27:04.375238 containerd[1528]: 2024-12-13 01:27:04.217 [INFO][4689] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--b7d896755--5fn2z-eth0 calico-apiserver-b7d896755- calico-apiserver f0cd5edf-7433-4815-a273-dae6acb01eb3 962 0 2024-12-13 01:26:40 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:b7d896755 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-b7d896755-5fn2z eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali07225598f4a [] []}} ContainerID="62a9b9b6c817f2be579c9f528ac91fcba653cf07aea92bc1248a14fd47875001" Namespace="calico-apiserver" Pod="calico-apiserver-b7d896755-5fn2z" WorkloadEndpoint="localhost-k8s-calico--apiserver--b7d896755--5fn2z-" Dec 13 01:27:04.375238 containerd[1528]: 2024-12-13 01:27:04.217 [INFO][4689] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="62a9b9b6c817f2be579c9f528ac91fcba653cf07aea92bc1248a14fd47875001" 
Namespace="calico-apiserver" Pod="calico-apiserver-b7d896755-5fn2z" WorkloadEndpoint="localhost-k8s-calico--apiserver--b7d896755--5fn2z-eth0" Dec 13 01:27:04.375238 containerd[1528]: 2024-12-13 01:27:04.264 [INFO][4717] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="62a9b9b6c817f2be579c9f528ac91fcba653cf07aea92bc1248a14fd47875001" HandleID="k8s-pod-network.62a9b9b6c817f2be579c9f528ac91fcba653cf07aea92bc1248a14fd47875001" Workload="localhost-k8s-calico--apiserver--b7d896755--5fn2z-eth0" Dec 13 01:27:04.375238 containerd[1528]: 2024-12-13 01:27:04.280 [INFO][4717] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="62a9b9b6c817f2be579c9f528ac91fcba653cf07aea92bc1248a14fd47875001" HandleID="k8s-pod-network.62a9b9b6c817f2be579c9f528ac91fcba653cf07aea92bc1248a14fd47875001" Workload="localhost-k8s-calico--apiserver--b7d896755--5fn2z-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003b2ea0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-b7d896755-5fn2z", "timestamp":"2024-12-13 01:27:04.26472634 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 01:27:04.375238 containerd[1528]: 2024-12-13 01:27:04.280 [INFO][4717] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:27:04.375238 containerd[1528]: 2024-12-13 01:27:04.302 [INFO][4717] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:27:04.375238 containerd[1528]: 2024-12-13 01:27:04.302 [INFO][4717] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 13 01:27:04.375238 containerd[1528]: 2024-12-13 01:27:04.304 [INFO][4717] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.62a9b9b6c817f2be579c9f528ac91fcba653cf07aea92bc1248a14fd47875001" host="localhost" Dec 13 01:27:04.375238 containerd[1528]: 2024-12-13 01:27:04.321 [INFO][4717] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Dec 13 01:27:04.375238 containerd[1528]: 2024-12-13 01:27:04.328 [INFO][4717] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Dec 13 01:27:04.375238 containerd[1528]: 2024-12-13 01:27:04.330 [INFO][4717] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 13 01:27:04.375238 containerd[1528]: 2024-12-13 01:27:04.333 [INFO][4717] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 13 01:27:04.375238 containerd[1528]: 2024-12-13 01:27:04.333 [INFO][4717] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.62a9b9b6c817f2be579c9f528ac91fcba653cf07aea92bc1248a14fd47875001" host="localhost" Dec 13 01:27:04.375238 containerd[1528]: 2024-12-13 01:27:04.337 [INFO][4717] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.62a9b9b6c817f2be579c9f528ac91fcba653cf07aea92bc1248a14fd47875001 Dec 13 01:27:04.375238 containerd[1528]: 2024-12-13 01:27:04.342 [INFO][4717] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.62a9b9b6c817f2be579c9f528ac91fcba653cf07aea92bc1248a14fd47875001" host="localhost" Dec 13 01:27:04.375238 containerd[1528]: 2024-12-13 01:27:04.348 [INFO][4717] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] 
block=192.168.88.128/26 handle="k8s-pod-network.62a9b9b6c817f2be579c9f528ac91fcba653cf07aea92bc1248a14fd47875001" host="localhost" Dec 13 01:27:04.375238 containerd[1528]: 2024-12-13 01:27:04.348 [INFO][4717] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.62a9b9b6c817f2be579c9f528ac91fcba653cf07aea92bc1248a14fd47875001" host="localhost" Dec 13 01:27:04.375238 containerd[1528]: 2024-12-13 01:27:04.348 [INFO][4717] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:27:04.375238 containerd[1528]: 2024-12-13 01:27:04.348 [INFO][4717] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="62a9b9b6c817f2be579c9f528ac91fcba653cf07aea92bc1248a14fd47875001" HandleID="k8s-pod-network.62a9b9b6c817f2be579c9f528ac91fcba653cf07aea92bc1248a14fd47875001" Workload="localhost-k8s-calico--apiserver--b7d896755--5fn2z-eth0" Dec 13 01:27:04.376240 containerd[1528]: 2024-12-13 01:27:04.352 [INFO][4689] cni-plugin/k8s.go 386: Populated endpoint ContainerID="62a9b9b6c817f2be579c9f528ac91fcba653cf07aea92bc1248a14fd47875001" Namespace="calico-apiserver" Pod="calico-apiserver-b7d896755-5fn2z" WorkloadEndpoint="localhost-k8s-calico--apiserver--b7d896755--5fn2z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--b7d896755--5fn2z-eth0", GenerateName:"calico-apiserver-b7d896755-", Namespace:"calico-apiserver", SelfLink:"", UID:"f0cd5edf-7433-4815-a273-dae6acb01eb3", ResourceVersion:"962", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 26, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"b7d896755", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-b7d896755-5fn2z", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali07225598f4a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:27:04.376240 containerd[1528]: 2024-12-13 01:27:04.353 [INFO][4689] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="62a9b9b6c817f2be579c9f528ac91fcba653cf07aea92bc1248a14fd47875001" Namespace="calico-apiserver" Pod="calico-apiserver-b7d896755-5fn2z" WorkloadEndpoint="localhost-k8s-calico--apiserver--b7d896755--5fn2z-eth0" Dec 13 01:27:04.376240 containerd[1528]: 2024-12-13 01:27:04.353 [INFO][4689] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali07225598f4a ContainerID="62a9b9b6c817f2be579c9f528ac91fcba653cf07aea92bc1248a14fd47875001" Namespace="calico-apiserver" Pod="calico-apiserver-b7d896755-5fn2z" WorkloadEndpoint="localhost-k8s-calico--apiserver--b7d896755--5fn2z-eth0" Dec 13 01:27:04.376240 containerd[1528]: 2024-12-13 01:27:04.356 [INFO][4689] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="62a9b9b6c817f2be579c9f528ac91fcba653cf07aea92bc1248a14fd47875001" Namespace="calico-apiserver" Pod="calico-apiserver-b7d896755-5fn2z" WorkloadEndpoint="localhost-k8s-calico--apiserver--b7d896755--5fn2z-eth0" Dec 13 01:27:04.376240 containerd[1528]: 2024-12-13 01:27:04.358 [INFO][4689] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="62a9b9b6c817f2be579c9f528ac91fcba653cf07aea92bc1248a14fd47875001" Namespace="calico-apiserver" Pod="calico-apiserver-b7d896755-5fn2z" WorkloadEndpoint="localhost-k8s-calico--apiserver--b7d896755--5fn2z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--b7d896755--5fn2z-eth0", GenerateName:"calico-apiserver-b7d896755-", Namespace:"calico-apiserver", SelfLink:"", UID:"f0cd5edf-7433-4815-a273-dae6acb01eb3", ResourceVersion:"962", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 26, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"b7d896755", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"62a9b9b6c817f2be579c9f528ac91fcba653cf07aea92bc1248a14fd47875001", Pod:"calico-apiserver-b7d896755-5fn2z", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali07225598f4a", MAC:"92:92:45:b3:cd:38", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:27:04.376240 containerd[1528]: 2024-12-13 01:27:04.367 [INFO][4689] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="62a9b9b6c817f2be579c9f528ac91fcba653cf07aea92bc1248a14fd47875001" Namespace="calico-apiserver" Pod="calico-apiserver-b7d896755-5fn2z" WorkloadEndpoint="localhost-k8s-calico--apiserver--b7d896755--5fn2z-eth0" Dec 13 01:27:04.386832 containerd[1528]: time="2024-12-13T01:27:04.386432073Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:27:04.386832 containerd[1528]: time="2024-12-13T01:27:04.386516279Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:27:04.386832 containerd[1528]: time="2024-12-13T01:27:04.386540720Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:27:04.387484 containerd[1528]: time="2024-12-13T01:27:04.386748574Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:27:04.418331 systemd-networkd[1220]: calid58fa563bad: Gained IPv6LL Dec 13 01:27:04.422933 containerd[1528]: time="2024-12-13T01:27:04.420568596Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:27:04.422933 containerd[1528]: time="2024-12-13T01:27:04.420633480Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:27:04.422933 containerd[1528]: time="2024-12-13T01:27:04.420657161Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:27:04.422933 containerd[1528]: time="2024-12-13T01:27:04.420742887Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:27:04.435286 systemd-resolved[1424]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 01:27:04.443164 systemd-resolved[1424]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 01:27:04.469331 containerd[1528]: time="2024-12-13T01:27:04.469224655Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b7d896755-5fn2z,Uid:f0cd5edf-7433-4815-a273-dae6acb01eb3,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"62a9b9b6c817f2be579c9f528ac91fcba653cf07aea92bc1248a14fd47875001\"" Dec 13 01:27:04.475320 containerd[1528]: time="2024-12-13T01:27:04.475276006Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-84b598996-gt9q4,Uid:f2e87485-5f8e-4164-9df5-1329c4f71d1a,Namespace:calico-system,Attempt:1,} returns sandbox id \"72a5e150efc6611b9da198cdd1c5c0f4f0a2c1f07d9d2351d8fb466ea8dabd68\"" Dec 13 01:27:04.930631 systemd-networkd[1220]: cali2410ad19c3a: Gained IPv6LL Dec 13 01:27:05.246685 kubelet[2715]: E1213 01:27:05.246579 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:27:05.315204 containerd[1528]: time="2024-12-13T01:27:05.315147861Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:27:05.317292 containerd[1528]: time="2024-12-13T01:27:05.317245594Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=39298409" Dec 13 01:27:05.318535 containerd[1528]: time="2024-12-13T01:27:05.318480072Z" level=info msg="ImageCreate event name:\"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:27:05.325563 containerd[1528]: time="2024-12-13T01:27:05.325504278Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:27:05.326311 containerd[1528]: time="2024-12-13T01:27:05.326268686Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest 
\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"40668079\" in 1.926094656s" Dec 13 01:27:05.326311 containerd[1528]: time="2024-12-13T01:27:05.326302008Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\"" Dec 13 01:27:05.328626 containerd[1528]: time="2024-12-13T01:27:05.327743500Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Dec 13 01:27:05.332401 containerd[1528]: time="2024-12-13T01:27:05.332254466Z" level=info msg="CreateContainer within sandbox \"4a9b3b57fb0202842cd5ebcccf29c3e28ed35425939549d6fe6949e3bcb67e92\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Dec 13 01:27:05.344834 containerd[1528]: time="2024-12-13T01:27:05.344724817Z" level=info msg="CreateContainer within sandbox \"4a9b3b57fb0202842cd5ebcccf29c3e28ed35425939549d6fe6949e3bcb67e92\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"c45357af89a57c5cebc99b609339200660788d34c185d8ff6094f629e6d9352e\"" Dec 13 01:27:05.345398 containerd[1528]: time="2024-12-13T01:27:05.345368458Z" level=info msg="StartContainer for \"c45357af89a57c5cebc99b609339200660788d34c185d8ff6094f629e6d9352e\"" Dec 13 01:27:05.389815 systemd[1]: run-containerd-runc-k8s.io-c45357af89a57c5cebc99b609339200660788d34c185d8ff6094f629e6d9352e-runc.Esy9nU.mount: Deactivated successfully. Dec 13 01:27:05.536581 containerd[1528]: time="2024-12-13T01:27:05.536375098Z" level=info msg="StartContainer for \"c45357af89a57c5cebc99b609339200660788d34c185d8ff6094f629e6d9352e\" returns successfully" Dec 13 01:27:05.864854 sshd[4674]: pam_unix(sshd:session): session closed for user core Dec 13 01:27:05.872017 systemd[1]: Started sshd@15-10.0.0.44:22-10.0.0.1:56430.service - OpenSSH per-connection server daemon (10.0.0.1:56430). Dec 13 01:27:05.872414 systemd[1]: sshd@14-10.0.0.44:22-10.0.0.1:56426.service: Deactivated successfully. Dec 13 01:27:05.876930 systemd-logind[1510]: Session 15 logged out. Waiting for processes to exit. Dec 13 01:27:05.881450 systemd[1]: session-15.scope: Deactivated successfully. Dec 13 01:27:05.888188 systemd-logind[1510]: Removed session 15. Dec 13 01:27:05.925133 sshd[4904]: Accepted publickey for core from 10.0.0.1 port 56430 ssh2: RSA SHA256:yVKhZEHbC7ylZ7bY3Y8pwdh1t/xp6Vz/y3yLFfd9j+Q Dec 13 01:27:05.926515 sshd[4904]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:27:05.930447 systemd-logind[1510]: New session 16 of user core. Dec 13 01:27:05.939979 systemd[1]: Started session-16.scope - Session 16 of User core. Dec 13 01:27:06.017813 systemd-networkd[1220]: cali07225598f4a: Gained IPv6LL Dec 13 01:27:06.054324 containerd[1528]: time="2024-12-13T01:27:06.054270987Z" level=info msg="StopPodSandbox for \"7c991000aab5483990110654a983a36bc6b436881b5dd8ec7e317d4129a4a4c2\"" Dec 13 01:27:06.161131 containerd[1528]: 2024-12-13 01:27:06.113 [INFO][4935] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="7c991000aab5483990110654a983a36bc6b436881b5dd8ec7e317d4129a4a4c2" Dec 13 01:27:06.161131 containerd[1528]: 2024-12-13 01:27:06.114 [INFO][4935] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="7c991000aab5483990110654a983a36bc6b436881b5dd8ec7e317d4129a4a4c2" iface="eth0" netns="/var/run/netns/cni-646f24a8-96d6-0526-c59c-b9b79023d151" Dec 13 01:27:06.161131 containerd[1528]: 2024-12-13 01:27:06.114 [INFO][4935] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="7c991000aab5483990110654a983a36bc6b436881b5dd8ec7e317d4129a4a4c2" iface="eth0" netns="/var/run/netns/cni-646f24a8-96d6-0526-c59c-b9b79023d151" Dec 13 01:27:06.161131 containerd[1528]: 2024-12-13 01:27:06.115 [INFO][4935] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="7c991000aab5483990110654a983a36bc6b436881b5dd8ec7e317d4129a4a4c2" iface="eth0" netns="/var/run/netns/cni-646f24a8-96d6-0526-c59c-b9b79023d151" Dec 13 01:27:06.161131 containerd[1528]: 2024-12-13 01:27:06.115 [INFO][4935] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="7c991000aab5483990110654a983a36bc6b436881b5dd8ec7e317d4129a4a4c2" Dec 13 01:27:06.161131 containerd[1528]: 2024-12-13 01:27:06.115 [INFO][4935] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7c991000aab5483990110654a983a36bc6b436881b5dd8ec7e317d4129a4a4c2" Dec 13 01:27:06.161131 containerd[1528]: 2024-12-13 01:27:06.142 [INFO][4942] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7c991000aab5483990110654a983a36bc6b436881b5dd8ec7e317d4129a4a4c2" HandleID="k8s-pod-network.7c991000aab5483990110654a983a36bc6b436881b5dd8ec7e317d4129a4a4c2" Workload="localhost-k8s-coredns--76f75df574--l5x25-eth0" Dec 13 01:27:06.161131 containerd[1528]: 2024-12-13 01:27:06.142 [INFO][4942] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:27:06.161131 containerd[1528]: 2024-12-13 01:27:06.143 [INFO][4942] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:27:06.161131 containerd[1528]: 2024-12-13 01:27:06.151 [WARNING][4942] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="7c991000aab5483990110654a983a36bc6b436881b5dd8ec7e317d4129a4a4c2" HandleID="k8s-pod-network.7c991000aab5483990110654a983a36bc6b436881b5dd8ec7e317d4129a4a4c2" Workload="localhost-k8s-coredns--76f75df574--l5x25-eth0" Dec 13 01:27:06.161131 containerd[1528]: 2024-12-13 01:27:06.151 [INFO][4942] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7c991000aab5483990110654a983a36bc6b436881b5dd8ec7e317d4129a4a4c2" HandleID="k8s-pod-network.7c991000aab5483990110654a983a36bc6b436881b5dd8ec7e317d4129a4a4c2" Workload="localhost-k8s-coredns--76f75df574--l5x25-eth0" Dec 13 01:27:06.161131 containerd[1528]: 2024-12-13 01:27:06.154 [INFO][4942] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:27:06.161131 containerd[1528]: 2024-12-13 01:27:06.157 [INFO][4935] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="7c991000aab5483990110654a983a36bc6b436881b5dd8ec7e317d4129a4a4c2" Dec 13 01:27:06.161131 containerd[1528]: time="2024-12-13T01:27:06.159538681Z" level=info msg="TearDown network for sandbox \"7c991000aab5483990110654a983a36bc6b436881b5dd8ec7e317d4129a4a4c2\" successfully" Dec 13 01:27:06.161131 containerd[1528]: time="2024-12-13T01:27:06.159566243Z" level=info msg="StopPodSandbox for \"7c991000aab5483990110654a983a36bc6b436881b5dd8ec7e317d4129a4a4c2\" returns successfully" Dec 13 01:27:06.161913 kubelet[2715]: E1213 01:27:06.159944 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:27:06.162024 containerd[1528]: time="2024-12-13T01:27:06.161947911Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-l5x25,Uid:931767d7-5830-4f3b-991c-c63e121572c9,Namespace:kube-system,Attempt:1,}" Dec 13 01:27:06.269412 kubelet[2715]: I1213 01:27:06.267642 2715 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-b7d896755-dlt9b" podStartSLOduration=24.340578874 podStartE2EDuration="26.267570428s" podCreationTimestamp="2024-12-13 01:26:40 +0000 UTC" firstStartedPulling="2024-12-13 01:27:03.399894572 +0000 UTC m=+45.435709601" lastFinishedPulling="2024-12-13 01:27:05.326886126 +0000 UTC m=+47.362701155" observedRunningTime="2024-12-13 01:27:06.265322927 +0000 UTC m=+48.301137996" watchObservedRunningTime="2024-12-13 01:27:06.267570428 +0000 UTC m=+48.303385457" Dec 13 01:27:06.273966 systemd-networkd[1220]: cali715f0c0cbc0: Gained IPv6LL Dec 13 01:27:06.300739 sshd[4904]: pam_unix(sshd:session): session closed for user core Dec 13 01:27:06.311487 systemd[1]: Started sshd@16-10.0.0.44:22-10.0.0.1:56438.service - OpenSSH per-connection server daemon (10.0.0.1:56438). Dec 13 01:27:06.311906 systemd[1]: sshd@15-10.0.0.44:22-10.0.0.1:56430.service: Deactivated successfully. Dec 13 01:27:06.317405 systemd[1]: session-16.scope: Deactivated successfully. Dec 13 01:27:06.321216 systemd-logind[1510]: Session 16 logged out. Waiting for processes to exit. Dec 13 01:27:06.322501 systemd-logind[1510]: Removed session 16. Dec 13 01:27:06.338819 systemd-networkd[1220]: calif359576c1da: Link UP Dec 13 01:27:06.340628 systemd-networkd[1220]: calif359576c1da: Gained carrier Dec 13 01:27:06.350444 systemd[1]: run-netns-cni\x2d646f24a8\x2d96d6\x2d0526\x2dc59c\x2db9b79023d151.mount: Deactivated successfully. 
Dec 13 01:27:06.357724 containerd[1528]: 2024-12-13 01:27:06.219 [INFO][4951] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--76f75df574--l5x25-eth0 coredns-76f75df574- kube-system 931767d7-5830-4f3b-991c-c63e121572c9 1001 0 2024-12-13 01:26:34 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-76f75df574-l5x25 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calif359576c1da [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="ca79f31bbab5cbd85f48b627ebb6374b7521c5187d9e7d1c84ebdab6c6816c56" Namespace="kube-system" Pod="coredns-76f75df574-l5x25" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--l5x25-" Dec 13 01:27:06.357724 containerd[1528]: 2024-12-13 01:27:06.219 [INFO][4951] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="ca79f31bbab5cbd85f48b627ebb6374b7521c5187d9e7d1c84ebdab6c6816c56" Namespace="kube-system" Pod="coredns-76f75df574-l5x25" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--l5x25-eth0" Dec 13 01:27:06.357724 containerd[1528]: 2024-12-13 01:27:06.256 [INFO][4963] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ca79f31bbab5cbd85f48b627ebb6374b7521c5187d9e7d1c84ebdab6c6816c56" HandleID="k8s-pod-network.ca79f31bbab5cbd85f48b627ebb6374b7521c5187d9e7d1c84ebdab6c6816c56" Workload="localhost-k8s-coredns--76f75df574--l5x25-eth0" Dec 13 01:27:06.357724 containerd[1528]: 2024-12-13 01:27:06.278 [INFO][4963] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ca79f31bbab5cbd85f48b627ebb6374b7521c5187d9e7d1c84ebdab6c6816c56" HandleID="k8s-pod-network.ca79f31bbab5cbd85f48b627ebb6374b7521c5187d9e7d1c84ebdab6c6816c56" Workload="localhost-k8s-coredns--76f75df574--l5x25-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004c410), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-76f75df574-l5x25", "timestamp":"2024-12-13 01:27:06.256461254 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 01:27:06.357724 containerd[1528]: 2024-12-13 01:27:06.278 [INFO][4963] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:27:06.357724 containerd[1528]: 2024-12-13 01:27:06.278 [INFO][4963] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 01:27:06.357724 containerd[1528]: 2024-12-13 01:27:06.278 [INFO][4963] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 13 01:27:06.357724 containerd[1528]: 2024-12-13 01:27:06.284 [INFO][4963] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.ca79f31bbab5cbd85f48b627ebb6374b7521c5187d9e7d1c84ebdab6c6816c56" host="localhost" Dec 13 01:27:06.357724 containerd[1528]: 2024-12-13 01:27:06.291 [INFO][4963] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Dec 13 01:27:06.357724 containerd[1528]: 2024-12-13 01:27:06.296 [INFO][4963] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Dec 13 01:27:06.357724 containerd[1528]: 2024-12-13 01:27:06.301 [INFO][4963] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 13 01:27:06.357724 containerd[1528]: 2024-12-13 01:27:06.307 [INFO][4963] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 13 01:27:06.357724 containerd[1528]: 2024-12-13 01:27:06.307 [INFO][4963] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ca79f31bbab5cbd85f48b627ebb6374b7521c5187d9e7d1c84ebdab6c6816c56" host="localhost" Dec 13 01:27:06.357724 containerd[1528]: 2024-12-13 01:27:06.309 [INFO][4963] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.ca79f31bbab5cbd85f48b627ebb6374b7521c5187d9e7d1c84ebdab6c6816c56 Dec 13 01:27:06.357724 containerd[1528]: 2024-12-13 01:27:06.315 [INFO][4963] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ca79f31bbab5cbd85f48b627ebb6374b7521c5187d9e7d1c84ebdab6c6816c56" host="localhost" Dec 13 01:27:06.357724 containerd[1528]: 2024-12-13 01:27:06.326 [INFO][4963] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.ca79f31bbab5cbd85f48b627ebb6374b7521c5187d9e7d1c84ebdab6c6816c56" host="localhost" Dec 13 01:27:06.357724 containerd[1528]: 2024-12-13 01:27:06.328 [INFO][4963] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.ca79f31bbab5cbd85f48b627ebb6374b7521c5187d9e7d1c84ebdab6c6816c56" host="localhost" Dec 13 01:27:06.357724 containerd[1528]: 2024-12-13 01:27:06.328 [INFO][4963] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Dec 13 01:27:06.357724 containerd[1528]: 2024-12-13 01:27:06.328 [INFO][4963] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="ca79f31bbab5cbd85f48b627ebb6374b7521c5187d9e7d1c84ebdab6c6816c56" HandleID="k8s-pod-network.ca79f31bbab5cbd85f48b627ebb6374b7521c5187d9e7d1c84ebdab6c6816c56" Workload="localhost-k8s-coredns--76f75df574--l5x25-eth0" Dec 13 01:27:06.358446 containerd[1528]: 2024-12-13 01:27:06.331 [INFO][4951] cni-plugin/k8s.go 386: Populated endpoint ContainerID="ca79f31bbab5cbd85f48b627ebb6374b7521c5187d9e7d1c84ebdab6c6816c56" Namespace="kube-system" Pod="coredns-76f75df574-l5x25" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--l5x25-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--l5x25-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"931767d7-5830-4f3b-991c-c63e121572c9", ResourceVersion:"1001", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 26, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-76f75df574-l5x25", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif359576c1da", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:27:06.358446 containerd[1528]: 2024-12-13 01:27:06.332 [INFO][4951] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.134/32] ContainerID="ca79f31bbab5cbd85f48b627ebb6374b7521c5187d9e7d1c84ebdab6c6816c56" Namespace="kube-system" Pod="coredns-76f75df574-l5x25" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--l5x25-eth0" Dec 13 01:27:06.358446 containerd[1528]: 2024-12-13 01:27:06.332 [INFO][4951] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif359576c1da ContainerID="ca79f31bbab5cbd85f48b627ebb6374b7521c5187d9e7d1c84ebdab6c6816c56" Namespace="kube-system" Pod="coredns-76f75df574-l5x25" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--l5x25-eth0" Dec 13 01:27:06.358446 containerd[1528]: 2024-12-13 01:27:06.333 [INFO][4951] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ca79f31bbab5cbd85f48b627ebb6374b7521c5187d9e7d1c84ebdab6c6816c56" Namespace="kube-system" Pod="coredns-76f75df574-l5x25" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--l5x25-eth0" Dec 13 01:27:06.358446 containerd[1528]: 2024-12-13 01:27:06.334 
[INFO][4951] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="ca79f31bbab5cbd85f48b627ebb6374b7521c5187d9e7d1c84ebdab6c6816c56" Namespace="kube-system" Pod="coredns-76f75df574-l5x25" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--l5x25-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--l5x25-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"931767d7-5830-4f3b-991c-c63e121572c9", ResourceVersion:"1001", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 26, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ca79f31bbab5cbd85f48b627ebb6374b7521c5187d9e7d1c84ebdab6c6816c56", Pod:"coredns-76f75df574-l5x25", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif359576c1da", MAC:"a2:fc:dd:2a:a6:9d", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:27:06.358446 containerd[1528]: 2024-12-13 01:27:06.349 [INFO][4951] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="ca79f31bbab5cbd85f48b627ebb6374b7521c5187d9e7d1c84ebdab6c6816c56" Namespace="kube-system" Pod="coredns-76f75df574-l5x25" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--l5x25-eth0" Dec 13 01:27:06.381341 sshd[4971]: Accepted publickey for core from 10.0.0.1 port 56438 ssh2: RSA SHA256:yVKhZEHbC7ylZ7bY3Y8pwdh1t/xp6Vz/y3yLFfd9j+Q Dec 13 01:27:06.382129 sshd[4971]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:27:06.388858 systemd-logind[1510]: New session 17 of user core. Dec 13 01:27:06.389643 containerd[1528]: time="2024-12-13T01:27:06.389391675Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:27:06.389643 containerd[1528]: time="2024-12-13T01:27:06.389462840Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:27:06.389643 containerd[1528]: time="2024-12-13T01:27:06.389482961Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:27:06.389853 containerd[1528]: time="2024-12-13T01:27:06.389620010Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:27:06.395172 systemd[1]: Started session-17.scope - Session 17 of User core. Dec 13 01:27:06.412062 systemd-resolved[1424]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 01:27:06.440741 containerd[1528]: time="2024-12-13T01:27:06.440650917Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-l5x25,Uid:931767d7-5830-4f3b-991c-c63e121572c9,Namespace:kube-system,Attempt:1,} returns sandbox id \"ca79f31bbab5cbd85f48b627ebb6374b7521c5187d9e7d1c84ebdab6c6816c56\"" Dec 13 01:27:06.442165 kubelet[2715]: E1213 01:27:06.442144 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:27:06.445219 containerd[1528]: time="2024-12-13T01:27:06.444369149Z" level=info msg="CreateContainer within sandbox \"ca79f31bbab5cbd85f48b627ebb6374b7521c5187d9e7d1c84ebdab6c6816c56\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 01:27:06.459869 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2289920666.mount: Deactivated successfully. Dec 13 01:27:06.465754 containerd[1528]: time="2024-12-13T01:27:06.465718122Z" level=info msg="CreateContainer within sandbox \"ca79f31bbab5cbd85f48b627ebb6374b7521c5187d9e7d1c84ebdab6c6816c56\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8deb6d8c5784836fe3ffefdea095dc5d4638dfa612b2c2acaf148aa3daa38f59\"" Dec 13 01:27:06.467342 containerd[1528]: time="2024-12-13T01:27:06.467289300Z" level=info msg="StartContainer for \"8deb6d8c5784836fe3ffefdea095dc5d4638dfa612b2c2acaf148aa3daa38f59\"" Dec 13 01:27:06.532405 containerd[1528]: time="2024-12-13T01:27:06.532235316Z" level=info msg="StartContainer for \"8deb6d8c5784836fe3ffefdea095dc5d4638dfa612b2c2acaf148aa3daa38f59\" returns successfully" Dec 13 01:27:06.601447 sshd[4971]: pam_unix(sshd:session): session closed for user core Dec 13 01:27:06.605115 systemd-logind[1510]: Session 17 logged out. Waiting for processes to exit. Dec 13 01:27:06.606886 systemd[1]: sshd@16-10.0.0.44:22-10.0.0.1:56438.service: Deactivated successfully. Dec 13 01:27:06.608513 systemd[1]: session-17.scope: Deactivated successfully. Dec 13 01:27:06.611096 systemd-logind[1510]: Removed session 17. 
Dec 13 01:27:06.683690 containerd[1528]: time="2024-12-13T01:27:06.683283949Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:27:06.685624 containerd[1528]: time="2024-12-13T01:27:06.684126842Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7464730" Dec 13 01:27:06.685624 containerd[1528]: time="2024-12-13T01:27:06.684925972Z" level=info msg="ImageCreate event name:\"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:27:06.691279 containerd[1528]: time="2024-12-13T01:27:06.691244606Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:27:06.691923 containerd[1528]: time="2024-12-13T01:27:06.691893327Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"8834384\" in 1.364110384s" Dec 13 01:27:06.691964 containerd[1528]: time="2024-12-13T01:27:06.691929169Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\"" Dec 13 01:27:06.694386 containerd[1528]: time="2024-12-13T01:27:06.694360561Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Dec 13 01:27:06.695796 containerd[1528]: time="2024-12-13T01:27:06.695763689Z" level=info msg="CreateContainer within sandbox \"39caa24729eb787c764b1b4c9e50d5ee8bfb2c36882cd6669410aebeb8590c5c\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Dec 13 01:27:06.706757 containerd[1528]: time="2024-12-13T01:27:06.706716533Z" level=info msg="CreateContainer within sandbox \"39caa24729eb787c764b1b4c9e50d5ee8bfb2c36882cd6669410aebeb8590c5c\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"5222a5637f6ceb4d5b256a3f1c24d793e9740cd5f13252ed768fec52c49d4eab\"" Dec 13 01:27:06.707624 containerd[1528]: time="2024-12-13T01:27:06.707090476Z" level=info msg="StartContainer for \"5222a5637f6ceb4d5b256a3f1c24d793e9740cd5f13252ed768fec52c49d4eab\"" Dec 13 01:27:06.760139 containerd[1528]: time="2024-12-13T01:27:06.760098066Z" level=info msg="StartContainer for \"5222a5637f6ceb4d5b256a3f1c24d793e9740cd5f13252ed768fec52c49d4eab\" returns successfully" Dec 13 01:27:06.941593 containerd[1528]: time="2024-12-13T01:27:06.941455192Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:27:06.945667 containerd[1528]: time="2024-12-13T01:27:06.945586490Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77" Dec 13 01:27:06.947195 containerd[1528]: time="2024-12-13T01:27:06.947142507Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest 
\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"40668079\" in 252.746144ms" Dec 13 01:27:06.947195 containerd[1528]: time="2024-12-13T01:27:06.947180670Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\"" Dec 13 01:27:06.947792 containerd[1528]: time="2024-12-13T01:27:06.947754186Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Dec 13 01:27:06.954302 containerd[1528]: time="2024-12-13T01:27:06.954258152Z" level=info msg="CreateContainer within sandbox \"62a9b9b6c817f2be579c9f528ac91fcba653cf07aea92bc1248a14fd47875001\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Dec 13 01:27:06.969200 containerd[1528]: time="2024-12-13T01:27:06.969138161Z" level=info msg="CreateContainer within sandbox \"62a9b9b6c817f2be579c9f528ac91fcba653cf07aea92bc1248a14fd47875001\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"1a75540f4ade12e6f69f8f31a2595004dff3dab3e7bc15fa5e47f61fd8aac930\"" Dec 13 01:27:06.970353 containerd[1528]: time="2024-12-13T01:27:06.970125743Z" level=info msg="StartContainer for \"1a75540f4ade12e6f69f8f31a2595004dff3dab3e7bc15fa5e47f61fd8aac930\"" Dec 13 01:27:07.064737 containerd[1528]: time="2024-12-13T01:27:07.064678587Z" level=info msg="StartContainer for \"1a75540f4ade12e6f69f8f31a2595004dff3dab3e7bc15fa5e47f61fd8aac930\" returns successfully" Dec 13 01:27:07.273992 kubelet[2715]: I1213 01:27:07.273950 2715 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 01:27:07.275038 kubelet[2715]: E1213 01:27:07.274813 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:27:07.288766 kubelet[2715]: I1213 01:27:07.288391 2715 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-l5x25" podStartSLOduration=33.288320144 podStartE2EDuration="33.288320144s" podCreationTimestamp="2024-12-13 01:26:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:27:07.287296561 +0000 UTC m=+49.323111590" watchObservedRunningTime="2024-12-13 01:27:07.288320144 +0000 UTC m=+49.324135173" Dec 13 01:27:07.297454 kubelet[2715]: I1213 01:27:07.297404 2715 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-b7d896755-5fn2z" podStartSLOduration=24.820249421 podStartE2EDuration="27.29736486s" podCreationTimestamp="2024-12-13 01:26:40 +0000 UTC" firstStartedPulling="2024-12-13 01:27:04.470394291 +0000 UTC m=+46.506209320" lastFinishedPulling="2024-12-13 01:27:06.94750973 +0000 UTC m=+48.983324759" observedRunningTime="2024-12-13 01:27:07.29720829 +0000 UTC m=+49.333023319" watchObservedRunningTime="2024-12-13 01:27:07.29736486 +0000 UTC m=+49.333179889" Dec 13 01:27:07.681798 systemd-networkd[1220]: calif359576c1da: Gained IPv6LL Dec 13 01:27:08.277157 kubelet[2715]: I1213 01:27:08.277108 2715 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 01:27:08.279830 kubelet[2715]: E1213 01:27:08.279654 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 
1.0.0.1 8.8.8.8" Dec 13 01:27:08.529106 containerd[1528]: time="2024-12-13T01:27:08.528950749Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:27:08.530132 containerd[1528]: time="2024-12-13T01:27:08.529989532Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=31953828" Dec 13 01:27:08.531123 containerd[1528]: time="2024-12-13T01:27:08.531074718Z" level=info msg="ImageCreate event name:\"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:27:08.532965 containerd[1528]: time="2024-12-13T01:27:08.532903149Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:27:08.534027 containerd[1528]: time="2024-12-13T01:27:08.533852086Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"33323450\" in 1.585903169s" Dec 13 01:27:08.534027 containerd[1528]: time="2024-12-13T01:27:08.533889649Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\"" Dec 13 01:27:08.535239 containerd[1528]: time="2024-12-13T01:27:08.535088281Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Dec 13 01:27:08.543573 containerd[1528]: time="2024-12-13T01:27:08.542822310Z" level=info msg="CreateContainer within sandbox \"72a5e150efc6611b9da198cdd1c5c0f4f0a2c1f07d9d2351d8fb466ea8dabd68\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Dec 13 01:27:08.554863 containerd[1528]: time="2024-12-13T01:27:08.554815997Z" level=info msg="CreateContainer within sandbox \"72a5e150efc6611b9da198cdd1c5c0f4f0a2c1f07d9d2351d8fb466ea8dabd68\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"01fc305c3860c9d94f717c12f12ed3ffad7613fac61d56e435130918467f73c8\"" Dec 13 01:27:08.555445 containerd[1528]: time="2024-12-13T01:27:08.555420474Z" level=info msg="StartContainer for \"01fc305c3860c9d94f717c12f12ed3ffad7613fac61d56e435130918467f73c8\"" Dec 13 01:27:08.613978 containerd[1528]: time="2024-12-13T01:27:08.611923820Z" level=info msg="StartContainer for \"01fc305c3860c9d94f717c12f12ed3ffad7613fac61d56e435130918467f73c8\" returns successfully" Dec 13 01:27:09.283685 kubelet[2715]: E1213 01:27:09.283657 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:27:09.296445 kubelet[2715]: I1213 01:27:09.296397 2715 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-84b598996-gt9q4" podStartSLOduration=23.238230942 podStartE2EDuration="27.296318309s" podCreationTimestamp="2024-12-13 01:26:42 +0000 UTC" firstStartedPulling="2024-12-13 01:27:04.476286311 +0000 UTC m=+46.512101340" 
lastFinishedPulling="2024-12-13 01:27:08.534373718 +0000 UTC m=+50.570188707" observedRunningTime="2024-12-13 01:27:09.294084896 +0000 UTC m=+51.329899965" watchObservedRunningTime="2024-12-13 01:27:09.296318309 +0000 UTC m=+51.332133338" Dec 13 01:27:09.859626 containerd[1528]: time="2024-12-13T01:27:09.859559312Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:27:09.860983 containerd[1528]: time="2024-12-13T01:27:09.860935274Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=9883368" Dec 13 01:27:09.861885 containerd[1528]: time="2024-12-13T01:27:09.861850889Z" level=info msg="ImageCreate event name:\"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:27:09.863682 containerd[1528]: time="2024-12-13T01:27:09.863626155Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:27:09.864399 containerd[1528]: time="2024-12-13T01:27:09.864366120Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11252974\" in 1.329240996s" Dec 13 01:27:09.864399 containerd[1528]: time="2024-12-13T01:27:09.864400722Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\"" Dec 13 01:27:09.866414 containerd[1528]: time="2024-12-13T01:27:09.866377280Z" level=info msg="CreateContainer within sandbox \"39caa24729eb787c764b1b4c9e50d5ee8bfb2c36882cd6669410aebeb8590c5c\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Dec 13 01:27:09.888166 containerd[1528]: time="2024-12-13T01:27:09.888095739Z" level=info msg="CreateContainer within sandbox \"39caa24729eb787c764b1b4c9e50d5ee8bfb2c36882cd6669410aebeb8590c5c\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"4c9e8169698d4214db3f1ca78d0228b1edfa29624b9733edd774694386a8952b\"" Dec 13 01:27:09.889458 containerd[1528]: time="2024-12-13T01:27:09.889424298Z" level=info msg="StartContainer for \"4c9e8169698d4214db3f1ca78d0228b1edfa29624b9733edd774694386a8952b\"" Dec 13 01:27:09.915979 systemd[1]: run-containerd-runc-k8s.io-4c9e8169698d4214db3f1ca78d0228b1edfa29624b9733edd774694386a8952b-runc.I1KFw5.mount: Deactivated successfully. 
Dec 13 01:27:09.939469 containerd[1528]: time="2024-12-13T01:27:09.939377806Z" level=info msg="StartContainer for \"4c9e8169698d4214db3f1ca78d0228b1edfa29624b9733edd774694386a8952b\" returns successfully" Dec 13 01:27:10.154665 kubelet[2715]: I1213 01:27:10.153937 2715 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Dec 13 01:27:10.155557 kubelet[2715]: I1213 01:27:10.155520 2715 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Dec 13 01:27:10.295620 kubelet[2715]: I1213 01:27:10.295588 2715 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-mmdpd" podStartSLOduration=21.835268928 podStartE2EDuration="28.295539957s" podCreationTimestamp="2024-12-13 01:26:42 +0000 UTC" firstStartedPulling="2024-12-13 01:27:03.404552398 +0000 UTC m=+45.440367427" lastFinishedPulling="2024-12-13 01:27:09.864823427 +0000 UTC m=+51.900638456" observedRunningTime="2024-12-13 01:27:10.29490436 +0000 UTC m=+52.330719389" watchObservedRunningTime="2024-12-13 01:27:10.295539957 +0000 UTC m=+52.331354986" Dec 13 01:27:11.613882 systemd[1]: Started sshd@17-10.0.0.44:22-10.0.0.1:56448.service - OpenSSH per-connection server daemon (10.0.0.1:56448). Dec 13 01:27:11.654902 sshd[5281]: Accepted publickey for core from 10.0.0.1 port 56448 ssh2: RSA SHA256:yVKhZEHbC7ylZ7bY3Y8pwdh1t/xp6Vz/y3yLFfd9j+Q Dec 13 01:27:11.656322 sshd[5281]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:27:11.662269 systemd-logind[1510]: New session 18 of user core. Dec 13 01:27:11.668917 systemd[1]: Started session-18.scope - Session 18 of User core. Dec 13 01:27:11.856123 sshd[5281]: pam_unix(sshd:session): session closed for user core Dec 13 01:27:11.859268 systemd[1]: sshd@17-10.0.0.44:22-10.0.0.1:56448.service: Deactivated successfully. Dec 13 01:27:11.861384 systemd[1]: session-18.scope: Deactivated successfully. Dec 13 01:27:11.861417 systemd-logind[1510]: Session 18 logged out. Waiting for processes to exit. Dec 13 01:27:11.864210 systemd-logind[1510]: Removed session 18. Dec 13 01:27:12.361687 kubelet[2715]: E1213 01:27:12.361654 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:27:16.866853 systemd[1]: Started sshd@18-10.0.0.44:22-10.0.0.1:36056.service - OpenSSH per-connection server daemon (10.0.0.1:36056). Dec 13 01:27:16.909299 sshd[5320]: Accepted publickey for core from 10.0.0.1 port 36056 ssh2: RSA SHA256:yVKhZEHbC7ylZ7bY3Y8pwdh1t/xp6Vz/y3yLFfd9j+Q Dec 13 01:27:16.910879 sshd[5320]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:27:16.914675 systemd-logind[1510]: New session 19 of user core. Dec 13 01:27:16.926894 systemd[1]: Started session-19.scope - Session 19 of User core. Dec 13 01:27:17.069369 sshd[5320]: pam_unix(sshd:session): session closed for user core Dec 13 01:27:17.073898 systemd[1]: sshd@18-10.0.0.44:22-10.0.0.1:36056.service: Deactivated successfully. Dec 13 01:27:17.075781 systemd-logind[1510]: Session 19 logged out. Waiting for processes to exit. Dec 13 01:27:17.075856 systemd[1]: session-19.scope: Deactivated successfully. Dec 13 01:27:17.077153 systemd-logind[1510]: Removed session 19. 
Dec 13 01:27:18.025413 containerd[1528]: time="2024-12-13T01:27:18.025377448Z" level=info msg="StopPodSandbox for \"5cba27d8ad78d9dfa7a3f814db920f0e37e2e358586b98f8235c0c959aa27d36\"" Dec 13 01:27:18.094570 containerd[1528]: 2024-12-13 01:27:18.060 [WARNING][5356] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="5cba27d8ad78d9dfa7a3f814db920f0e37e2e358586b98f8235c0c959aa27d36" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--b7d896755--5fn2z-eth0", GenerateName:"calico-apiserver-b7d896755-", Namespace:"calico-apiserver", SelfLink:"", UID:"f0cd5edf-7433-4815-a273-dae6acb01eb3", ResourceVersion:"1037", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 26, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"b7d896755", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"62a9b9b6c817f2be579c9f528ac91fcba653cf07aea92bc1248a14fd47875001", Pod:"calico-apiserver-b7d896755-5fn2z", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali07225598f4a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:27:18.094570 containerd[1528]: 2024-12-13 01:27:18.060 [INFO][5356] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="5cba27d8ad78d9dfa7a3f814db920f0e37e2e358586b98f8235c0c959aa27d36" Dec 13 01:27:18.094570 containerd[1528]: 2024-12-13 01:27:18.060 [INFO][5356] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5cba27d8ad78d9dfa7a3f814db920f0e37e2e358586b98f8235c0c959aa27d36" iface="eth0" netns="" Dec 13 01:27:18.094570 containerd[1528]: 2024-12-13 01:27:18.060 [INFO][5356] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="5cba27d8ad78d9dfa7a3f814db920f0e37e2e358586b98f8235c0c959aa27d36" Dec 13 01:27:18.094570 containerd[1528]: 2024-12-13 01:27:18.060 [INFO][5356] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5cba27d8ad78d9dfa7a3f814db920f0e37e2e358586b98f8235c0c959aa27d36" Dec 13 01:27:18.094570 containerd[1528]: 2024-12-13 01:27:18.081 [INFO][5363] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5cba27d8ad78d9dfa7a3f814db920f0e37e2e358586b98f8235c0c959aa27d36" HandleID="k8s-pod-network.5cba27d8ad78d9dfa7a3f814db920f0e37e2e358586b98f8235c0c959aa27d36" Workload="localhost-k8s-calico--apiserver--b7d896755--5fn2z-eth0" Dec 13 01:27:18.094570 containerd[1528]: 2024-12-13 01:27:18.081 [INFO][5363] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:27:18.094570 containerd[1528]: 2024-12-13 01:27:18.081 [INFO][5363] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 01:27:18.094570 containerd[1528]: 2024-12-13 01:27:18.089 [WARNING][5363] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="5cba27d8ad78d9dfa7a3f814db920f0e37e2e358586b98f8235c0c959aa27d36" HandleID="k8s-pod-network.5cba27d8ad78d9dfa7a3f814db920f0e37e2e358586b98f8235c0c959aa27d36" Workload="localhost-k8s-calico--apiserver--b7d896755--5fn2z-eth0" Dec 13 01:27:18.094570 containerd[1528]: 2024-12-13 01:27:18.089 [INFO][5363] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5cba27d8ad78d9dfa7a3f814db920f0e37e2e358586b98f8235c0c959aa27d36" HandleID="k8s-pod-network.5cba27d8ad78d9dfa7a3f814db920f0e37e2e358586b98f8235c0c959aa27d36" Workload="localhost-k8s-calico--apiserver--b7d896755--5fn2z-eth0" Dec 13 01:27:18.094570 containerd[1528]: 2024-12-13 01:27:18.090 [INFO][5363] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:27:18.094570 containerd[1528]: 2024-12-13 01:27:18.092 [INFO][5356] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="5cba27d8ad78d9dfa7a3f814db920f0e37e2e358586b98f8235c0c959aa27d36" Dec 13 01:27:18.094570 containerd[1528]: time="2024-12-13T01:27:18.094440441Z" level=info msg="TearDown network for sandbox \"5cba27d8ad78d9dfa7a3f814db920f0e37e2e358586b98f8235c0c959aa27d36\" successfully" Dec 13 01:27:18.094570 containerd[1528]: time="2024-12-13T01:27:18.094465002Z" level=info msg="StopPodSandbox for \"5cba27d8ad78d9dfa7a3f814db920f0e37e2e358586b98f8235c0c959aa27d36\" returns successfully" Dec 13 01:27:18.095036 containerd[1528]: time="2024-12-13T01:27:18.094998911Z" level=info msg="RemovePodSandbox for \"5cba27d8ad78d9dfa7a3f814db920f0e37e2e358586b98f8235c0c959aa27d36\"" Dec 13 01:27:18.108494 containerd[1528]: time="2024-12-13T01:27:18.108437681Z" level=info msg="Forcibly stopping sandbox \"5cba27d8ad78d9dfa7a3f814db920f0e37e2e358586b98f8235c0c959aa27d36\"" Dec 13 01:27:18.176679 containerd[1528]: 2024-12-13 01:27:18.143 [WARNING][5388] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5cba27d8ad78d9dfa7a3f814db920f0e37e2e358586b98f8235c0c959aa27d36" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--b7d896755--5fn2z-eth0", GenerateName:"calico-apiserver-b7d896755-", Namespace:"calico-apiserver", SelfLink:"", UID:"f0cd5edf-7433-4815-a273-dae6acb01eb3", ResourceVersion:"1037", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 26, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"b7d896755", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"62a9b9b6c817f2be579c9f528ac91fcba653cf07aea92bc1248a14fd47875001", Pod:"calico-apiserver-b7d896755-5fn2z", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali07225598f4a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:27:18.176679 containerd[1528]: 2024-12-13 01:27:18.144 [INFO][5388] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="5cba27d8ad78d9dfa7a3f814db920f0e37e2e358586b98f8235c0c959aa27d36" Dec 13 01:27:18.176679 containerd[1528]: 2024-12-13 01:27:18.144 [INFO][5388] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5cba27d8ad78d9dfa7a3f814db920f0e37e2e358586b98f8235c0c959aa27d36" iface="eth0" netns="" Dec 13 01:27:18.176679 containerd[1528]: 2024-12-13 01:27:18.144 [INFO][5388] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="5cba27d8ad78d9dfa7a3f814db920f0e37e2e358586b98f8235c0c959aa27d36" Dec 13 01:27:18.176679 containerd[1528]: 2024-12-13 01:27:18.144 [INFO][5388] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5cba27d8ad78d9dfa7a3f814db920f0e37e2e358586b98f8235c0c959aa27d36" Dec 13 01:27:18.176679 containerd[1528]: 2024-12-13 01:27:18.164 [INFO][5395] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5cba27d8ad78d9dfa7a3f814db920f0e37e2e358586b98f8235c0c959aa27d36" HandleID="k8s-pod-network.5cba27d8ad78d9dfa7a3f814db920f0e37e2e358586b98f8235c0c959aa27d36" Workload="localhost-k8s-calico--apiserver--b7d896755--5fn2z-eth0" Dec 13 01:27:18.176679 containerd[1528]: 2024-12-13 01:27:18.164 [INFO][5395] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:27:18.176679 containerd[1528]: 2024-12-13 01:27:18.164 [INFO][5395] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:27:18.176679 containerd[1528]: 2024-12-13 01:27:18.171 [WARNING][5395] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5cba27d8ad78d9dfa7a3f814db920f0e37e2e358586b98f8235c0c959aa27d36" HandleID="k8s-pod-network.5cba27d8ad78d9dfa7a3f814db920f0e37e2e358586b98f8235c0c959aa27d36" Workload="localhost-k8s-calico--apiserver--b7d896755--5fn2z-eth0" Dec 13 01:27:18.176679 containerd[1528]: 2024-12-13 01:27:18.171 [INFO][5395] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5cba27d8ad78d9dfa7a3f814db920f0e37e2e358586b98f8235c0c959aa27d36" HandleID="k8s-pod-network.5cba27d8ad78d9dfa7a3f814db920f0e37e2e358586b98f8235c0c959aa27d36" Workload="localhost-k8s-calico--apiserver--b7d896755--5fn2z-eth0" Dec 13 01:27:18.176679 containerd[1528]: 2024-12-13 01:27:18.173 [INFO][5395] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:27:18.176679 containerd[1528]: 2024-12-13 01:27:18.174 [INFO][5388] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="5cba27d8ad78d9dfa7a3f814db920f0e37e2e358586b98f8235c0c959aa27d36" Dec 13 01:27:18.177097 containerd[1528]: time="2024-12-13T01:27:18.176720192Z" level=info msg="TearDown network for sandbox \"5cba27d8ad78d9dfa7a3f814db920f0e37e2e358586b98f8235c0c959aa27d36\" successfully" Dec 13 01:27:18.188051 containerd[1528]: time="2024-12-13T01:27:18.187898599Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5cba27d8ad78d9dfa7a3f814db920f0e37e2e358586b98f8235c0c959aa27d36\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 01:27:18.188051 containerd[1528]: time="2024-12-13T01:27:18.187982804Z" level=info msg="RemovePodSandbox \"5cba27d8ad78d9dfa7a3f814db920f0e37e2e358586b98f8235c0c959aa27d36\" returns successfully" Dec 13 01:27:18.188799 containerd[1528]: time="2024-12-13T01:27:18.188507432Z" level=info msg="StopPodSandbox for \"af1b3e13272174c75b138bb491fa40bb3981ebca13b40bca3a72da90dcbbd750\"" Dec 13 01:27:18.256649 containerd[1528]: 2024-12-13 01:27:18.223 [WARNING][5418] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="af1b3e13272174c75b138bb491fa40bb3981ebca13b40bca3a72da90dcbbd750" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--b7d896755--dlt9b-eth0", GenerateName:"calico-apiserver-b7d896755-", Namespace:"calico-apiserver", SelfLink:"", UID:"ebfbe69b-4807-4a90-8634-a91c3bc497ca", ResourceVersion:"1003", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 26, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"b7d896755", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4a9b3b57fb0202842cd5ebcccf29c3e28ed35425939549d6fe6949e3bcb67e92", Pod:"calico-apiserver-b7d896755-dlt9b", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid58fa563bad", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:27:18.256649 containerd[1528]: 2024-12-13 01:27:18.223 [INFO][5418] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="af1b3e13272174c75b138bb491fa40bb3981ebca13b40bca3a72da90dcbbd750" Dec 13 01:27:18.256649 containerd[1528]: 2024-12-13 01:27:18.223 [INFO][5418] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="af1b3e13272174c75b138bb491fa40bb3981ebca13b40bca3a72da90dcbbd750" iface="eth0" netns="" Dec 13 01:27:18.256649 containerd[1528]: 2024-12-13 01:27:18.223 [INFO][5418] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="af1b3e13272174c75b138bb491fa40bb3981ebca13b40bca3a72da90dcbbd750" Dec 13 01:27:18.256649 containerd[1528]: 2024-12-13 01:27:18.223 [INFO][5418] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="af1b3e13272174c75b138bb491fa40bb3981ebca13b40bca3a72da90dcbbd750" Dec 13 01:27:18.256649 containerd[1528]: 2024-12-13 01:27:18.243 [INFO][5426] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="af1b3e13272174c75b138bb491fa40bb3981ebca13b40bca3a72da90dcbbd750" HandleID="k8s-pod-network.af1b3e13272174c75b138bb491fa40bb3981ebca13b40bca3a72da90dcbbd750" Workload="localhost-k8s-calico--apiserver--b7d896755--dlt9b-eth0" Dec 13 01:27:18.256649 containerd[1528]: 2024-12-13 01:27:18.243 [INFO][5426] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:27:18.256649 containerd[1528]: 2024-12-13 01:27:18.243 [INFO][5426] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:27:18.256649 containerd[1528]: 2024-12-13 01:27:18.251 [WARNING][5426] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="af1b3e13272174c75b138bb491fa40bb3981ebca13b40bca3a72da90dcbbd750" HandleID="k8s-pod-network.af1b3e13272174c75b138bb491fa40bb3981ebca13b40bca3a72da90dcbbd750" Workload="localhost-k8s-calico--apiserver--b7d896755--dlt9b-eth0" Dec 13 01:27:18.256649 containerd[1528]: 2024-12-13 01:27:18.251 [INFO][5426] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="af1b3e13272174c75b138bb491fa40bb3981ebca13b40bca3a72da90dcbbd750" HandleID="k8s-pod-network.af1b3e13272174c75b138bb491fa40bb3981ebca13b40bca3a72da90dcbbd750" Workload="localhost-k8s-calico--apiserver--b7d896755--dlt9b-eth0" Dec 13 01:27:18.256649 containerd[1528]: 2024-12-13 01:27:18.253 [INFO][5426] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:27:18.256649 containerd[1528]: 2024-12-13 01:27:18.254 [INFO][5418] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="af1b3e13272174c75b138bb491fa40bb3981ebca13b40bca3a72da90dcbbd750" Dec 13 01:27:18.257041 containerd[1528]: time="2024-12-13T01:27:18.256690897Z" level=info msg="TearDown network for sandbox \"af1b3e13272174c75b138bb491fa40bb3981ebca13b40bca3a72da90dcbbd750\" successfully" Dec 13 01:27:18.257041 containerd[1528]: time="2024-12-13T01:27:18.256715139Z" level=info msg="StopPodSandbox for \"af1b3e13272174c75b138bb491fa40bb3981ebca13b40bca3a72da90dcbbd750\" returns successfully" Dec 13 01:27:18.257579 containerd[1528]: time="2024-12-13T01:27:18.257285370Z" level=info msg="RemovePodSandbox for \"af1b3e13272174c75b138bb491fa40bb3981ebca13b40bca3a72da90dcbbd750\"" Dec 13 01:27:18.257579 containerd[1528]: time="2024-12-13T01:27:18.257320772Z" level=info msg="Forcibly stopping sandbox \"af1b3e13272174c75b138bb491fa40bb3981ebca13b40bca3a72da90dcbbd750\"" Dec 13 01:27:18.340485 containerd[1528]: 2024-12-13 01:27:18.307 [WARNING][5448] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="af1b3e13272174c75b138bb491fa40bb3981ebca13b40bca3a72da90dcbbd750" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--b7d896755--dlt9b-eth0", GenerateName:"calico-apiserver-b7d896755-", Namespace:"calico-apiserver", SelfLink:"", UID:"ebfbe69b-4807-4a90-8634-a91c3bc497ca", ResourceVersion:"1003", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 26, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"b7d896755", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4a9b3b57fb0202842cd5ebcccf29c3e28ed35425939549d6fe6949e3bcb67e92", Pod:"calico-apiserver-b7d896755-dlt9b", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid58fa563bad", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:27:18.340485 containerd[1528]: 2024-12-13 01:27:18.307 [INFO][5448] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="af1b3e13272174c75b138bb491fa40bb3981ebca13b40bca3a72da90dcbbd750" Dec 13 01:27:18.340485 containerd[1528]: 2024-12-13 01:27:18.307 [INFO][5448] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="af1b3e13272174c75b138bb491fa40bb3981ebca13b40bca3a72da90dcbbd750" iface="eth0" netns="" Dec 13 01:27:18.340485 containerd[1528]: 2024-12-13 01:27:18.307 [INFO][5448] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="af1b3e13272174c75b138bb491fa40bb3981ebca13b40bca3a72da90dcbbd750" Dec 13 01:27:18.340485 containerd[1528]: 2024-12-13 01:27:18.307 [INFO][5448] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="af1b3e13272174c75b138bb491fa40bb3981ebca13b40bca3a72da90dcbbd750" Dec 13 01:27:18.340485 containerd[1528]: 2024-12-13 01:27:18.328 [INFO][5455] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="af1b3e13272174c75b138bb491fa40bb3981ebca13b40bca3a72da90dcbbd750" HandleID="k8s-pod-network.af1b3e13272174c75b138bb491fa40bb3981ebca13b40bca3a72da90dcbbd750" Workload="localhost-k8s-calico--apiserver--b7d896755--dlt9b-eth0" Dec 13 01:27:18.340485 containerd[1528]: 2024-12-13 01:27:18.328 [INFO][5455] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:27:18.340485 containerd[1528]: 2024-12-13 01:27:18.328 [INFO][5455] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:27:18.340485 containerd[1528]: 2024-12-13 01:27:18.335 [WARNING][5455] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="af1b3e13272174c75b138bb491fa40bb3981ebca13b40bca3a72da90dcbbd750" HandleID="k8s-pod-network.af1b3e13272174c75b138bb491fa40bb3981ebca13b40bca3a72da90dcbbd750" Workload="localhost-k8s-calico--apiserver--b7d896755--dlt9b-eth0" Dec 13 01:27:18.340485 containerd[1528]: 2024-12-13 01:27:18.335 [INFO][5455] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="af1b3e13272174c75b138bb491fa40bb3981ebca13b40bca3a72da90dcbbd750" HandleID="k8s-pod-network.af1b3e13272174c75b138bb491fa40bb3981ebca13b40bca3a72da90dcbbd750" Workload="localhost-k8s-calico--apiserver--b7d896755--dlt9b-eth0" Dec 13 01:27:18.340485 containerd[1528]: 2024-12-13 01:27:18.337 [INFO][5455] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:27:18.340485 containerd[1528]: 2024-12-13 01:27:18.338 [INFO][5448] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="af1b3e13272174c75b138bb491fa40bb3981ebca13b40bca3a72da90dcbbd750" Dec 13 01:27:18.340485 containerd[1528]: time="2024-12-13T01:27:18.340453849Z" level=info msg="TearDown network for sandbox \"af1b3e13272174c75b138bb491fa40bb3981ebca13b40bca3a72da90dcbbd750\" successfully" Dec 13 01:27:18.343088 containerd[1528]: time="2024-12-13T01:27:18.343057911Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"af1b3e13272174c75b138bb491fa40bb3981ebca13b40bca3a72da90dcbbd750\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 01:27:18.343149 containerd[1528]: time="2024-12-13T01:27:18.343118834Z" level=info msg="RemovePodSandbox \"af1b3e13272174c75b138bb491fa40bb3981ebca13b40bca3a72da90dcbbd750\" returns successfully" Dec 13 01:27:18.343633 containerd[1528]: time="2024-12-13T01:27:18.343547177Z" level=info msg="StopPodSandbox for \"d5c865eef5abf820b54edd80942497b241a2ccc708c307fea4f92854f55789e1\"" Dec 13 01:27:18.411010 containerd[1528]: 2024-12-13 01:27:18.377 [WARNING][5478] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d5c865eef5abf820b54edd80942497b241a2ccc708c307fea4f92854f55789e1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--mmdpd-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f1e8aa9b-f9a7-4786-81cc-8faa5931a2c7", ResourceVersion:"1074", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 26, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"39caa24729eb787c764b1b4c9e50d5ee8bfb2c36882cd6669410aebeb8590c5c", Pod:"csi-node-driver-mmdpd", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali2410ad19c3a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:27:18.411010 containerd[1528]: 2024-12-13 01:27:18.378 [INFO][5478] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d5c865eef5abf820b54edd80942497b241a2ccc708c307fea4f92854f55789e1" Dec 13 01:27:18.411010 containerd[1528]: 2024-12-13 01:27:18.378 [INFO][5478] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d5c865eef5abf820b54edd80942497b241a2ccc708c307fea4f92854f55789e1" iface="eth0" netns="" Dec 13 01:27:18.411010 containerd[1528]: 2024-12-13 01:27:18.378 [INFO][5478] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d5c865eef5abf820b54edd80942497b241a2ccc708c307fea4f92854f55789e1" Dec 13 01:27:18.411010 containerd[1528]: 2024-12-13 01:27:18.378 [INFO][5478] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d5c865eef5abf820b54edd80942497b241a2ccc708c307fea4f92854f55789e1" Dec 13 01:27:18.411010 containerd[1528]: 2024-12-13 01:27:18.398 [INFO][5485] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d5c865eef5abf820b54edd80942497b241a2ccc708c307fea4f92854f55789e1" HandleID="k8s-pod-network.d5c865eef5abf820b54edd80942497b241a2ccc708c307fea4f92854f55789e1" Workload="localhost-k8s-csi--node--driver--mmdpd-eth0" Dec 13 01:27:18.411010 containerd[1528]: 2024-12-13 01:27:18.398 [INFO][5485] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:27:18.411010 containerd[1528]: 2024-12-13 01:27:18.398 [INFO][5485] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:27:18.411010 containerd[1528]: 2024-12-13 01:27:18.406 [WARNING][5485] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d5c865eef5abf820b54edd80942497b241a2ccc708c307fea4f92854f55789e1" HandleID="k8s-pod-network.d5c865eef5abf820b54edd80942497b241a2ccc708c307fea4f92854f55789e1" Workload="localhost-k8s-csi--node--driver--mmdpd-eth0" Dec 13 01:27:18.411010 containerd[1528]: 2024-12-13 01:27:18.406 [INFO][5485] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d5c865eef5abf820b54edd80942497b241a2ccc708c307fea4f92854f55789e1" HandleID="k8s-pod-network.d5c865eef5abf820b54edd80942497b241a2ccc708c307fea4f92854f55789e1" Workload="localhost-k8s-csi--node--driver--mmdpd-eth0" Dec 13 01:27:18.411010 containerd[1528]: 2024-12-13 01:27:18.407 [INFO][5485] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:27:18.411010 containerd[1528]: 2024-12-13 01:27:18.409 [INFO][5478] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="d5c865eef5abf820b54edd80942497b241a2ccc708c307fea4f92854f55789e1" Dec 13 01:27:18.411415 containerd[1528]: time="2024-12-13T01:27:18.411049445Z" level=info msg="TearDown network for sandbox \"d5c865eef5abf820b54edd80942497b241a2ccc708c307fea4f92854f55789e1\" successfully" Dec 13 01:27:18.411415 containerd[1528]: time="2024-12-13T01:27:18.411075007Z" level=info msg="StopPodSandbox for \"d5c865eef5abf820b54edd80942497b241a2ccc708c307fea4f92854f55789e1\" returns successfully" Dec 13 01:27:18.411922 containerd[1528]: time="2024-12-13T01:27:18.411600675Z" level=info msg="RemovePodSandbox for \"d5c865eef5abf820b54edd80942497b241a2ccc708c307fea4f92854f55789e1\"" Dec 13 01:27:18.411922 containerd[1528]: time="2024-12-13T01:27:18.411654278Z" level=info msg="Forcibly stopping sandbox \"d5c865eef5abf820b54edd80942497b241a2ccc708c307fea4f92854f55789e1\"" Dec 13 01:27:18.476253 containerd[1528]: 2024-12-13 01:27:18.445 [WARNING][5507] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d5c865eef5abf820b54edd80942497b241a2ccc708c307fea4f92854f55789e1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--mmdpd-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f1e8aa9b-f9a7-4786-81cc-8faa5931a2c7", ResourceVersion:"1074", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 26, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"39caa24729eb787c764b1b4c9e50d5ee8bfb2c36882cd6669410aebeb8590c5c", Pod:"csi-node-driver-mmdpd", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali2410ad19c3a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:27:18.476253 containerd[1528]: 2024-12-13 01:27:18.445 [INFO][5507] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d5c865eef5abf820b54edd80942497b241a2ccc708c307fea4f92854f55789e1" Dec 13 01:27:18.476253 containerd[1528]: 2024-12-13 01:27:18.445 [INFO][5507] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d5c865eef5abf820b54edd80942497b241a2ccc708c307fea4f92854f55789e1" iface="eth0" netns="" Dec 13 01:27:18.476253 containerd[1528]: 2024-12-13 01:27:18.445 [INFO][5507] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d5c865eef5abf820b54edd80942497b241a2ccc708c307fea4f92854f55789e1" Dec 13 01:27:18.476253 containerd[1528]: 2024-12-13 01:27:18.445 [INFO][5507] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d5c865eef5abf820b54edd80942497b241a2ccc708c307fea4f92854f55789e1" Dec 13 01:27:18.476253 containerd[1528]: 2024-12-13 01:27:18.463 [INFO][5515] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d5c865eef5abf820b54edd80942497b241a2ccc708c307fea4f92854f55789e1" HandleID="k8s-pod-network.d5c865eef5abf820b54edd80942497b241a2ccc708c307fea4f92854f55789e1" Workload="localhost-k8s-csi--node--driver--mmdpd-eth0" Dec 13 01:27:18.476253 containerd[1528]: 2024-12-13 01:27:18.463 [INFO][5515] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:27:18.476253 containerd[1528]: 2024-12-13 01:27:18.463 [INFO][5515] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:27:18.476253 containerd[1528]: 2024-12-13 01:27:18.471 [WARNING][5515] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d5c865eef5abf820b54edd80942497b241a2ccc708c307fea4f92854f55789e1" HandleID="k8s-pod-network.d5c865eef5abf820b54edd80942497b241a2ccc708c307fea4f92854f55789e1" Workload="localhost-k8s-csi--node--driver--mmdpd-eth0" Dec 13 01:27:18.476253 containerd[1528]: 2024-12-13 01:27:18.471 [INFO][5515] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d5c865eef5abf820b54edd80942497b241a2ccc708c307fea4f92854f55789e1" HandleID="k8s-pod-network.d5c865eef5abf820b54edd80942497b241a2ccc708c307fea4f92854f55789e1" Workload="localhost-k8s-csi--node--driver--mmdpd-eth0" Dec 13 01:27:18.476253 containerd[1528]: 2024-12-13 01:27:18.472 [INFO][5515] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:27:18.476253 containerd[1528]: 2024-12-13 01:27:18.474 [INFO][5507] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="d5c865eef5abf820b54edd80942497b241a2ccc708c307fea4f92854f55789e1" Dec 13 01:27:18.477141 containerd[1528]: time="2024-12-13T01:27:18.476712853Z" level=info msg="TearDown network for sandbox \"d5c865eef5abf820b54edd80942497b241a2ccc708c307fea4f92854f55789e1\" successfully" Dec 13 01:27:18.479533 containerd[1528]: time="2024-12-13T01:27:18.479501845Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d5c865eef5abf820b54edd80942497b241a2ccc708c307fea4f92854f55789e1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 01:27:18.479671 containerd[1528]: time="2024-12-13T01:27:18.479648413Z" level=info msg="RemovePodSandbox \"d5c865eef5abf820b54edd80942497b241a2ccc708c307fea4f92854f55789e1\" returns successfully" Dec 13 01:27:18.480276 containerd[1528]: time="2024-12-13T01:27:18.480233285Z" level=info msg="StopPodSandbox for \"55bd0d64065eb3ffbbc5417c3405e781a368ce3e7beb30f5da484159c4e1f037\"" Dec 13 01:27:18.545443 containerd[1528]: 2024-12-13 01:27:18.514 [WARNING][5537] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="55bd0d64065eb3ffbbc5417c3405e781a368ce3e7beb30f5da484159c4e1f037" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--84b598996--gt9q4-eth0", GenerateName:"calico-kube-controllers-84b598996-", Namespace:"calico-system", SelfLink:"", UID:"f2e87485-5f8e-4164-9df5-1329c4f71d1a", ResourceVersion:"1060", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 26, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"84b598996", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"72a5e150efc6611b9da198cdd1c5c0f4f0a2c1f07d9d2351d8fb466ea8dabd68", Pod:"calico-kube-controllers-84b598996-gt9q4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali715f0c0cbc0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:27:18.545443 containerd[1528]: 2024-12-13 01:27:18.514 [INFO][5537] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="55bd0d64065eb3ffbbc5417c3405e781a368ce3e7beb30f5da484159c4e1f037" Dec 13 01:27:18.545443 containerd[1528]: 2024-12-13 01:27:18.514 [INFO][5537] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="55bd0d64065eb3ffbbc5417c3405e781a368ce3e7beb30f5da484159c4e1f037" iface="eth0" netns="" Dec 13 01:27:18.545443 containerd[1528]: 2024-12-13 01:27:18.514 [INFO][5537] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="55bd0d64065eb3ffbbc5417c3405e781a368ce3e7beb30f5da484159c4e1f037" Dec 13 01:27:18.545443 containerd[1528]: 2024-12-13 01:27:18.514 [INFO][5537] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="55bd0d64065eb3ffbbc5417c3405e781a368ce3e7beb30f5da484159c4e1f037" Dec 13 01:27:18.545443 containerd[1528]: 2024-12-13 01:27:18.532 [INFO][5545] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="55bd0d64065eb3ffbbc5417c3405e781a368ce3e7beb30f5da484159c4e1f037" HandleID="k8s-pod-network.55bd0d64065eb3ffbbc5417c3405e781a368ce3e7beb30f5da484159c4e1f037" Workload="localhost-k8s-calico--kube--controllers--84b598996--gt9q4-eth0" Dec 13 01:27:18.545443 containerd[1528]: 2024-12-13 01:27:18.532 [INFO][5545] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:27:18.545443 containerd[1528]: 2024-12-13 01:27:18.532 [INFO][5545] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:27:18.545443 containerd[1528]: 2024-12-13 01:27:18.540 [WARNING][5545] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="55bd0d64065eb3ffbbc5417c3405e781a368ce3e7beb30f5da484159c4e1f037" HandleID="k8s-pod-network.55bd0d64065eb3ffbbc5417c3405e781a368ce3e7beb30f5da484159c4e1f037" Workload="localhost-k8s-calico--kube--controllers--84b598996--gt9q4-eth0" Dec 13 01:27:18.545443 containerd[1528]: 2024-12-13 01:27:18.540 [INFO][5545] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="55bd0d64065eb3ffbbc5417c3405e781a368ce3e7beb30f5da484159c4e1f037" HandleID="k8s-pod-network.55bd0d64065eb3ffbbc5417c3405e781a368ce3e7beb30f5da484159c4e1f037" Workload="localhost-k8s-calico--kube--controllers--84b598996--gt9q4-eth0" Dec 13 01:27:18.545443 containerd[1528]: 2024-12-13 01:27:18.541 [INFO][5545] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:27:18.545443 containerd[1528]: 2024-12-13 01:27:18.543 [INFO][5537] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="55bd0d64065eb3ffbbc5417c3405e781a368ce3e7beb30f5da484159c4e1f037" Dec 13 01:27:18.545860 containerd[1528]: time="2024-12-13T01:27:18.545472350Z" level=info msg="TearDown network for sandbox \"55bd0d64065eb3ffbbc5417c3405e781a368ce3e7beb30f5da484159c4e1f037\" successfully" Dec 13 01:27:18.545860 containerd[1528]: time="2024-12-13T01:27:18.545509752Z" level=info msg="StopPodSandbox for \"55bd0d64065eb3ffbbc5417c3405e781a368ce3e7beb30f5da484159c4e1f037\" returns successfully" Dec 13 01:27:18.546338 containerd[1528]: time="2024-12-13T01:27:18.546048461Z" level=info msg="RemovePodSandbox for \"55bd0d64065eb3ffbbc5417c3405e781a368ce3e7beb30f5da484159c4e1f037\"" Dec 13 01:27:18.546338 containerd[1528]: time="2024-12-13T01:27:18.546081543Z" level=info msg="Forcibly stopping sandbox \"55bd0d64065eb3ffbbc5417c3405e781a368ce3e7beb30f5da484159c4e1f037\"" Dec 13 01:27:18.612448 containerd[1528]: 2024-12-13 01:27:18.580 [WARNING][5568] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="55bd0d64065eb3ffbbc5417c3405e781a368ce3e7beb30f5da484159c4e1f037" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--84b598996--gt9q4-eth0", GenerateName:"calico-kube-controllers-84b598996-", Namespace:"calico-system", SelfLink:"", UID:"f2e87485-5f8e-4164-9df5-1329c4f71d1a", ResourceVersion:"1060", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 26, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"84b598996", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"72a5e150efc6611b9da198cdd1c5c0f4f0a2c1f07d9d2351d8fb466ea8dabd68", Pod:"calico-kube-controllers-84b598996-gt9q4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali715f0c0cbc0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:27:18.612448 containerd[1528]: 2024-12-13 01:27:18.581 [INFO][5568] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="55bd0d64065eb3ffbbc5417c3405e781a368ce3e7beb30f5da484159c4e1f037" Dec 13 01:27:18.612448 containerd[1528]: 2024-12-13 01:27:18.581 [INFO][5568] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="55bd0d64065eb3ffbbc5417c3405e781a368ce3e7beb30f5da484159c4e1f037" iface="eth0" netns="" Dec 13 01:27:18.612448 containerd[1528]: 2024-12-13 01:27:18.581 [INFO][5568] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="55bd0d64065eb3ffbbc5417c3405e781a368ce3e7beb30f5da484159c4e1f037" Dec 13 01:27:18.612448 containerd[1528]: 2024-12-13 01:27:18.581 [INFO][5568] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="55bd0d64065eb3ffbbc5417c3405e781a368ce3e7beb30f5da484159c4e1f037" Dec 13 01:27:18.612448 containerd[1528]: 2024-12-13 01:27:18.599 [INFO][5576] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="55bd0d64065eb3ffbbc5417c3405e781a368ce3e7beb30f5da484159c4e1f037" HandleID="k8s-pod-network.55bd0d64065eb3ffbbc5417c3405e781a368ce3e7beb30f5da484159c4e1f037" Workload="localhost-k8s-calico--kube--controllers--84b598996--gt9q4-eth0" Dec 13 01:27:18.612448 containerd[1528]: 2024-12-13 01:27:18.599 [INFO][5576] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:27:18.612448 containerd[1528]: 2024-12-13 01:27:18.599 [INFO][5576] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:27:18.612448 containerd[1528]: 2024-12-13 01:27:18.607 [WARNING][5576] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="55bd0d64065eb3ffbbc5417c3405e781a368ce3e7beb30f5da484159c4e1f037" HandleID="k8s-pod-network.55bd0d64065eb3ffbbc5417c3405e781a368ce3e7beb30f5da484159c4e1f037" Workload="localhost-k8s-calico--kube--controllers--84b598996--gt9q4-eth0" Dec 13 01:27:18.612448 containerd[1528]: 2024-12-13 01:27:18.607 [INFO][5576] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="55bd0d64065eb3ffbbc5417c3405e781a368ce3e7beb30f5da484159c4e1f037" HandleID="k8s-pod-network.55bd0d64065eb3ffbbc5417c3405e781a368ce3e7beb30f5da484159c4e1f037" Workload="localhost-k8s-calico--kube--controllers--84b598996--gt9q4-eth0" Dec 13 01:27:18.612448 containerd[1528]: 2024-12-13 01:27:18.608 [INFO][5576] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:27:18.612448 containerd[1528]: 2024-12-13 01:27:18.610 [INFO][5568] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="55bd0d64065eb3ffbbc5417c3405e781a368ce3e7beb30f5da484159c4e1f037" Dec 13 01:27:18.612843 containerd[1528]: time="2024-12-13T01:27:18.612441629Z" level=info msg="TearDown network for sandbox \"55bd0d64065eb3ffbbc5417c3405e781a368ce3e7beb30f5da484159c4e1f037\" successfully" Dec 13 01:27:18.615799 containerd[1528]: time="2024-12-13T01:27:18.615760609Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"55bd0d64065eb3ffbbc5417c3405e781a368ce3e7beb30f5da484159c4e1f037\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 01:27:18.615842 containerd[1528]: time="2024-12-13T01:27:18.615824453Z" level=info msg="RemovePodSandbox \"55bd0d64065eb3ffbbc5417c3405e781a368ce3e7beb30f5da484159c4e1f037\" returns successfully" Dec 13 01:27:18.616357 containerd[1528]: time="2024-12-13T01:27:18.616337761Z" level=info msg="StopPodSandbox for \"7c991000aab5483990110654a983a36bc6b436881b5dd8ec7e317d4129a4a4c2\"" Dec 13 01:27:18.687251 containerd[1528]: 2024-12-13 01:27:18.656 [WARNING][5598] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7c991000aab5483990110654a983a36bc6b436881b5dd8ec7e317d4129a4a4c2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--l5x25-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"931767d7-5830-4f3b-991c-c63e121572c9", ResourceVersion:"1046", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 26, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ca79f31bbab5cbd85f48b627ebb6374b7521c5187d9e7d1c84ebdab6c6816c56", Pod:"coredns-76f75df574-l5x25", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif359576c1da", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:27:18.687251 containerd[1528]: 2024-12-13 01:27:18.656 [INFO][5598] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="7c991000aab5483990110654a983a36bc6b436881b5dd8ec7e317d4129a4a4c2" Dec 13 01:27:18.687251 containerd[1528]: 2024-12-13 01:27:18.656 [INFO][5598] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7c991000aab5483990110654a983a36bc6b436881b5dd8ec7e317d4129a4a4c2" iface="eth0" netns="" Dec 13 01:27:18.687251 containerd[1528]: 2024-12-13 01:27:18.656 [INFO][5598] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="7c991000aab5483990110654a983a36bc6b436881b5dd8ec7e317d4129a4a4c2" Dec 13 01:27:18.687251 containerd[1528]: 2024-12-13 01:27:18.656 [INFO][5598] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7c991000aab5483990110654a983a36bc6b436881b5dd8ec7e317d4129a4a4c2" Dec 13 01:27:18.687251 containerd[1528]: 2024-12-13 01:27:18.674 [INFO][5606] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7c991000aab5483990110654a983a36bc6b436881b5dd8ec7e317d4129a4a4c2" HandleID="k8s-pod-network.7c991000aab5483990110654a983a36bc6b436881b5dd8ec7e317d4129a4a4c2" Workload="localhost-k8s-coredns--76f75df574--l5x25-eth0" Dec 13 01:27:18.687251 containerd[1528]: 2024-12-13 01:27:18.674 [INFO][5606] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:27:18.687251 containerd[1528]: 2024-12-13 01:27:18.674 [INFO][5606] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 01:27:18.687251 containerd[1528]: 2024-12-13 01:27:18.682 [WARNING][5606] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="7c991000aab5483990110654a983a36bc6b436881b5dd8ec7e317d4129a4a4c2" HandleID="k8s-pod-network.7c991000aab5483990110654a983a36bc6b436881b5dd8ec7e317d4129a4a4c2" Workload="localhost-k8s-coredns--76f75df574--l5x25-eth0" Dec 13 01:27:18.687251 containerd[1528]: 2024-12-13 01:27:18.682 [INFO][5606] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7c991000aab5483990110654a983a36bc6b436881b5dd8ec7e317d4129a4a4c2" HandleID="k8s-pod-network.7c991000aab5483990110654a983a36bc6b436881b5dd8ec7e317d4129a4a4c2" Workload="localhost-k8s-coredns--76f75df574--l5x25-eth0" Dec 13 01:27:18.687251 containerd[1528]: 2024-12-13 01:27:18.683 [INFO][5606] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:27:18.687251 containerd[1528]: 2024-12-13 01:27:18.685 [INFO][5598] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="7c991000aab5483990110654a983a36bc6b436881b5dd8ec7e317d4129a4a4c2" Dec 13 01:27:18.687676 containerd[1528]: time="2024-12-13T01:27:18.687288976Z" level=info msg="TearDown network for sandbox \"7c991000aab5483990110654a983a36bc6b436881b5dd8ec7e317d4129a4a4c2\" successfully" Dec 13 01:27:18.687676 containerd[1528]: time="2024-12-13T01:27:18.687314337Z" level=info msg="StopPodSandbox for \"7c991000aab5483990110654a983a36bc6b436881b5dd8ec7e317d4129a4a4c2\" returns successfully" Dec 13 01:27:18.687831 containerd[1528]: time="2024-12-13T01:27:18.687793403Z" level=info msg="RemovePodSandbox for \"7c991000aab5483990110654a983a36bc6b436881b5dd8ec7e317d4129a4a4c2\"" Dec 13 01:27:18.687831 containerd[1528]: time="2024-12-13T01:27:18.687828245Z" level=info msg="Forcibly stopping sandbox \"7c991000aab5483990110654a983a36bc6b436881b5dd8ec7e317d4129a4a4c2\"" Dec 13 01:27:18.752395 containerd[1528]: 2024-12-13 01:27:18.720 [WARNING][5629] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7c991000aab5483990110654a983a36bc6b436881b5dd8ec7e317d4129a4a4c2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--l5x25-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"931767d7-5830-4f3b-991c-c63e121572c9", ResourceVersion:"1046", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 26, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ca79f31bbab5cbd85f48b627ebb6374b7521c5187d9e7d1c84ebdab6c6816c56", Pod:"coredns-76f75df574-l5x25", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif359576c1da", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:27:18.752395 containerd[1528]: 2024-12-13 01:27:18.721 [INFO][5629] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="7c991000aab5483990110654a983a36bc6b436881b5dd8ec7e317d4129a4a4c2" Dec 13 01:27:18.752395 containerd[1528]: 2024-12-13 01:27:18.721 [INFO][5629] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7c991000aab5483990110654a983a36bc6b436881b5dd8ec7e317d4129a4a4c2" iface="eth0" netns="" Dec 13 01:27:18.752395 containerd[1528]: 2024-12-13 01:27:18.721 [INFO][5629] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="7c991000aab5483990110654a983a36bc6b436881b5dd8ec7e317d4129a4a4c2" Dec 13 01:27:18.752395 containerd[1528]: 2024-12-13 01:27:18.721 [INFO][5629] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7c991000aab5483990110654a983a36bc6b436881b5dd8ec7e317d4129a4a4c2" Dec 13 01:27:18.752395 containerd[1528]: 2024-12-13 01:27:18.740 [INFO][5637] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7c991000aab5483990110654a983a36bc6b436881b5dd8ec7e317d4129a4a4c2" HandleID="k8s-pod-network.7c991000aab5483990110654a983a36bc6b436881b5dd8ec7e317d4129a4a4c2" Workload="localhost-k8s-coredns--76f75df574--l5x25-eth0" Dec 13 01:27:18.752395 containerd[1528]: 2024-12-13 01:27:18.740 [INFO][5637] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:27:18.752395 containerd[1528]: 2024-12-13 01:27:18.740 [INFO][5637] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 01:27:18.752395 containerd[1528]: 2024-12-13 01:27:18.747 [WARNING][5637] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="7c991000aab5483990110654a983a36bc6b436881b5dd8ec7e317d4129a4a4c2" HandleID="k8s-pod-network.7c991000aab5483990110654a983a36bc6b436881b5dd8ec7e317d4129a4a4c2" Workload="localhost-k8s-coredns--76f75df574--l5x25-eth0" Dec 13 01:27:18.752395 containerd[1528]: 2024-12-13 01:27:18.747 [INFO][5637] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7c991000aab5483990110654a983a36bc6b436881b5dd8ec7e317d4129a4a4c2" HandleID="k8s-pod-network.7c991000aab5483990110654a983a36bc6b436881b5dd8ec7e317d4129a4a4c2" Workload="localhost-k8s-coredns--76f75df574--l5x25-eth0" Dec 13 01:27:18.752395 containerd[1528]: 2024-12-13 01:27:18.748 [INFO][5637] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:27:18.752395 containerd[1528]: 2024-12-13 01:27:18.750 [INFO][5629] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="7c991000aab5483990110654a983a36bc6b436881b5dd8ec7e317d4129a4a4c2" Dec 13 01:27:18.752395 containerd[1528]: time="2024-12-13T01:27:18.752366032Z" level=info msg="TearDown network for sandbox \"7c991000aab5483990110654a983a36bc6b436881b5dd8ec7e317d4129a4a4c2\" successfully" Dec 13 01:27:18.754899 containerd[1528]: time="2024-12-13T01:27:18.754869048Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7c991000aab5483990110654a983a36bc6b436881b5dd8ec7e317d4129a4a4c2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 01:27:18.754955 containerd[1528]: time="2024-12-13T01:27:18.754922211Z" level=info msg="RemovePodSandbox \"7c991000aab5483990110654a983a36bc6b436881b5dd8ec7e317d4129a4a4c2\" returns successfully" Dec 13 01:27:18.755629 containerd[1528]: time="2024-12-13T01:27:18.755406678Z" level=info msg="StopPodSandbox for \"2ffd9e2fc34cfd651fe8595556686f82bf711fd5575fd65d1404da8efd63709b\"" Dec 13 01:27:18.819387 containerd[1528]: 2024-12-13 01:27:18.788 [WARNING][5659] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2ffd9e2fc34cfd651fe8595556686f82bf711fd5575fd65d1404da8efd63709b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--pfwc5-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"00ee66f4-a276-4315-8517-eae981e857e4", ResourceVersion:"945", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 26, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5aade9f454a3491352bfd74fbffb03c6639c581e4a35bed23361f11212385114", Pod:"coredns-76f75df574-pfwc5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibc1c7924b93", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:27:18.819387 containerd[1528]: 2024-12-13 01:27:18.789 [INFO][5659] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="2ffd9e2fc34cfd651fe8595556686f82bf711fd5575fd65d1404da8efd63709b" Dec 13 01:27:18.819387 containerd[1528]: 2024-12-13 01:27:18.789 [INFO][5659] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2ffd9e2fc34cfd651fe8595556686f82bf711fd5575fd65d1404da8efd63709b" iface="eth0" netns="" Dec 13 01:27:18.819387 containerd[1528]: 2024-12-13 01:27:18.789 [INFO][5659] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="2ffd9e2fc34cfd651fe8595556686f82bf711fd5575fd65d1404da8efd63709b" Dec 13 01:27:18.819387 containerd[1528]: 2024-12-13 01:27:18.789 [INFO][5659] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2ffd9e2fc34cfd651fe8595556686f82bf711fd5575fd65d1404da8efd63709b" Dec 13 01:27:18.819387 containerd[1528]: 2024-12-13 01:27:18.807 [INFO][5666] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2ffd9e2fc34cfd651fe8595556686f82bf711fd5575fd65d1404da8efd63709b" HandleID="k8s-pod-network.2ffd9e2fc34cfd651fe8595556686f82bf711fd5575fd65d1404da8efd63709b" Workload="localhost-k8s-coredns--76f75df574--pfwc5-eth0" Dec 13 01:27:18.819387 containerd[1528]: 2024-12-13 01:27:18.807 [INFO][5666] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:27:18.819387 containerd[1528]: 2024-12-13 01:27:18.807 [INFO][5666] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 01:27:18.819387 containerd[1528]: 2024-12-13 01:27:18.814 [WARNING][5666] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="2ffd9e2fc34cfd651fe8595556686f82bf711fd5575fd65d1404da8efd63709b" HandleID="k8s-pod-network.2ffd9e2fc34cfd651fe8595556686f82bf711fd5575fd65d1404da8efd63709b" Workload="localhost-k8s-coredns--76f75df574--pfwc5-eth0" Dec 13 01:27:18.819387 containerd[1528]: 2024-12-13 01:27:18.814 [INFO][5666] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2ffd9e2fc34cfd651fe8595556686f82bf711fd5575fd65d1404da8efd63709b" HandleID="k8s-pod-network.2ffd9e2fc34cfd651fe8595556686f82bf711fd5575fd65d1404da8efd63709b" Workload="localhost-k8s-coredns--76f75df574--pfwc5-eth0" Dec 13 01:27:18.819387 containerd[1528]: 2024-12-13 01:27:18.816 [INFO][5666] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:27:18.819387 containerd[1528]: 2024-12-13 01:27:18.817 [INFO][5659] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="2ffd9e2fc34cfd651fe8595556686f82bf711fd5575fd65d1404da8efd63709b" Dec 13 01:27:18.820005 containerd[1528]: time="2024-12-13T01:27:18.819423836Z" level=info msg="TearDown network for sandbox \"2ffd9e2fc34cfd651fe8595556686f82bf711fd5575fd65d1404da8efd63709b\" successfully" Dec 13 01:27:18.820005 containerd[1528]: time="2024-12-13T01:27:18.819447998Z" level=info msg="StopPodSandbox for \"2ffd9e2fc34cfd651fe8595556686f82bf711fd5575fd65d1404da8efd63709b\" returns successfully" Dec 13 01:27:18.820005 containerd[1528]: time="2024-12-13T01:27:18.819901262Z" level=info msg="RemovePodSandbox for \"2ffd9e2fc34cfd651fe8595556686f82bf711fd5575fd65d1404da8efd63709b\"" Dec 13 01:27:18.820005 containerd[1528]: time="2024-12-13T01:27:18.819927584Z" level=info msg="Forcibly stopping sandbox \"2ffd9e2fc34cfd651fe8595556686f82bf711fd5575fd65d1404da8efd63709b\"" Dec 13 01:27:18.885260 containerd[1528]: 2024-12-13 01:27:18.852 [WARNING][5688] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2ffd9e2fc34cfd651fe8595556686f82bf711fd5575fd65d1404da8efd63709b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--pfwc5-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"00ee66f4-a276-4315-8517-eae981e857e4", ResourceVersion:"945", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 26, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5aade9f454a3491352bfd74fbffb03c6639c581e4a35bed23361f11212385114", Pod:"coredns-76f75df574-pfwc5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibc1c7924b93", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:27:18.885260 containerd[1528]: 2024-12-13 01:27:18.852 [INFO][5688] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="2ffd9e2fc34cfd651fe8595556686f82bf711fd5575fd65d1404da8efd63709b" Dec 13 01:27:18.885260 containerd[1528]: 2024-12-13 01:27:18.852 [INFO][5688] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2ffd9e2fc34cfd651fe8595556686f82bf711fd5575fd65d1404da8efd63709b" iface="eth0" netns="" Dec 13 01:27:18.885260 containerd[1528]: 2024-12-13 01:27:18.852 [INFO][5688] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="2ffd9e2fc34cfd651fe8595556686f82bf711fd5575fd65d1404da8efd63709b" Dec 13 01:27:18.885260 containerd[1528]: 2024-12-13 01:27:18.852 [INFO][5688] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2ffd9e2fc34cfd651fe8595556686f82bf711fd5575fd65d1404da8efd63709b" Dec 13 01:27:18.885260 containerd[1528]: 2024-12-13 01:27:18.872 [INFO][5695] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2ffd9e2fc34cfd651fe8595556686f82bf711fd5575fd65d1404da8efd63709b" HandleID="k8s-pod-network.2ffd9e2fc34cfd651fe8595556686f82bf711fd5575fd65d1404da8efd63709b" Workload="localhost-k8s-coredns--76f75df574--pfwc5-eth0" Dec 13 01:27:18.885260 containerd[1528]: 2024-12-13 01:27:18.872 [INFO][5695] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:27:18.885260 containerd[1528]: 2024-12-13 01:27:18.872 [INFO][5695] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 01:27:18.885260 containerd[1528]: 2024-12-13 01:27:18.880 [WARNING][5695] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="2ffd9e2fc34cfd651fe8595556686f82bf711fd5575fd65d1404da8efd63709b" HandleID="k8s-pod-network.2ffd9e2fc34cfd651fe8595556686f82bf711fd5575fd65d1404da8efd63709b" Workload="localhost-k8s-coredns--76f75df574--pfwc5-eth0" Dec 13 01:27:18.885260 containerd[1528]: 2024-12-13 01:27:18.880 [INFO][5695] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2ffd9e2fc34cfd651fe8595556686f82bf711fd5575fd65d1404da8efd63709b" HandleID="k8s-pod-network.2ffd9e2fc34cfd651fe8595556686f82bf711fd5575fd65d1404da8efd63709b" Workload="localhost-k8s-coredns--76f75df574--pfwc5-eth0" Dec 13 01:27:18.885260 containerd[1528]: 2024-12-13 01:27:18.882 [INFO][5695] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:27:18.885260 containerd[1528]: 2024-12-13 01:27:18.883 [INFO][5688] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="2ffd9e2fc34cfd651fe8595556686f82bf711fd5575fd65d1404da8efd63709b" Dec 13 01:27:18.885260 containerd[1528]: time="2024-12-13T01:27:18.885239373Z" level=info msg="TearDown network for sandbox \"2ffd9e2fc34cfd651fe8595556686f82bf711fd5575fd65d1404da8efd63709b\" successfully" Dec 13 01:27:18.888094 containerd[1528]: time="2024-12-13T01:27:18.888051565Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2ffd9e2fc34cfd651fe8595556686f82bf711fd5575fd65d1404da8efd63709b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 01:27:18.888165 containerd[1528]: time="2024-12-13T01:27:18.888116969Z" level=info msg="RemovePodSandbox \"2ffd9e2fc34cfd651fe8595556686f82bf711fd5575fd65d1404da8efd63709b\" returns successfully" Dec 13 01:27:20.616237 systemd[1]: run-containerd-runc-k8s.io-01fc305c3860c9d94f717c12f12ed3ffad7613fac61d56e435130918467f73c8-runc.PtDWFK.mount: Deactivated successfully. Dec 13 01:27:22.084848 systemd[1]: Started sshd@19-10.0.0.44:22-10.0.0.1:36066.service - OpenSSH per-connection server daemon (10.0.0.1:36066). Dec 13 01:27:22.131472 sshd[5743]: Accepted publickey for core from 10.0.0.1 port 36066 ssh2: RSA SHA256:yVKhZEHbC7ylZ7bY3Y8pwdh1t/xp6Vz/y3yLFfd9j+Q Dec 13 01:27:22.135075 sshd[5743]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:27:22.139426 systemd-logind[1510]: New session 20 of user core. Dec 13 01:27:22.147874 systemd[1]: Started session-20.scope - Session 20 of User core. Dec 13 01:27:22.312854 sshd[5743]: pam_unix(sshd:session): session closed for user core Dec 13 01:27:22.315893 systemd[1]: sshd@19-10.0.0.44:22-10.0.0.1:36066.service: Deactivated successfully. Dec 13 01:27:22.319958 systemd-logind[1510]: Session 20 logged out. Waiting for processes to exit. Dec 13 01:27:22.320654 systemd[1]: session-20.scope: Deactivated successfully. Dec 13 01:27:22.324051 systemd-logind[1510]: Removed session 20. Dec 13 01:27:23.475134 kubelet[2715]: I1213 01:27:23.475090 2715 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 01:27:23.814587 kubelet[2715]: I1213 01:27:23.814501 2715 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 01:27:27.329835 systemd[1]: Started sshd@20-10.0.0.44:22-10.0.0.1:34894.service - OpenSSH per-connection server daemon (10.0.0.1:34894). 
Dec 13 01:27:27.363990 sshd[5766]: Accepted publickey for core from 10.0.0.1 port 34894 ssh2: RSA SHA256:yVKhZEHbC7ylZ7bY3Y8pwdh1t/xp6Vz/y3yLFfd9j+Q Dec 13 01:27:27.365168 sshd[5766]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:27:27.368729 systemd-logind[1510]: New session 21 of user core. Dec 13 01:27:27.378078 systemd[1]: Started session-21.scope - Session 21 of User core. Dec 13 01:27:27.518901 sshd[5766]: pam_unix(sshd:session): session closed for user core Dec 13 01:27:27.522192 systemd[1]: sshd@20-10.0.0.44:22-10.0.0.1:34894.service: Deactivated successfully. Dec 13 01:27:27.524604 systemd-logind[1510]: Session 21 logged out. Waiting for processes to exit. Dec 13 01:27:27.524746 systemd[1]: session-21.scope: Deactivated successfully. Dec 13 01:27:27.528160 systemd-logind[1510]: Removed session 21.