Dec 13 01:47:19.954520 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Dec 13 01:47:19.954540 kernel: Linux version 6.6.65-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Thu Dec 12 23:24:21 -00 2024 Dec 13 01:47:19.954549 kernel: KASLR enabled Dec 13 01:47:19.954555 kernel: efi: EFI v2.7 by EDK II Dec 13 01:47:19.954561 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdba86018 ACPI 2.0=0xd9710018 RNG=0xd971e498 MEMRESERVE=0xd9b43d18 Dec 13 01:47:19.954566 kernel: random: crng init done Dec 13 01:47:19.954645 kernel: ACPI: Early table checksum verification disabled Dec 13 01:47:19.954652 kernel: ACPI: RSDP 0x00000000D9710018 000024 (v02 BOCHS ) Dec 13 01:47:19.954659 kernel: ACPI: XSDT 0x00000000D971FE98 000064 (v01 BOCHS BXPC 00000001 01000013) Dec 13 01:47:19.954668 kernel: ACPI: FACP 0x00000000D971FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 01:47:19.954675 kernel: ACPI: DSDT 0x00000000D9717518 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 01:47:19.954680 kernel: ACPI: APIC 0x00000000D971FC18 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 01:47:19.954686 kernel: ACPI: PPTT 0x00000000D971D898 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 01:47:19.954692 kernel: ACPI: GTDT 0x00000000D971E818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 01:47:19.954700 kernel: ACPI: MCFG 0x00000000D971E918 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 01:47:19.954707 kernel: ACPI: SPCR 0x00000000D971FF98 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 01:47:19.954714 kernel: ACPI: DBG2 0x00000000D971E418 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 01:47:19.954720 kernel: ACPI: IORT 0x00000000D971E718 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 01:47:19.954726 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600 Dec 13 01:47:19.954732 kernel: NUMA: Failed to initialise from firmware Dec 13 01:47:19.954739 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff] Dec 13 01:47:19.954745 kernel: NUMA: NODE_DATA [mem 0xdc958800-0xdc95dfff] Dec 13 01:47:19.954751 kernel: Zone ranges: Dec 13 01:47:19.954757 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff] Dec 13 01:47:19.954764 kernel: DMA32 empty Dec 13 01:47:19.954771 kernel: Normal empty Dec 13 01:47:19.954778 kernel: Movable zone start for each node Dec 13 01:47:19.954784 kernel: Early memory node ranges Dec 13 01:47:19.954790 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff] Dec 13 01:47:19.954797 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff] Dec 13 01:47:19.954803 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff] Dec 13 01:47:19.954809 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff] Dec 13 01:47:19.954815 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff] Dec 13 01:47:19.954821 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff] Dec 13 01:47:19.954827 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff] Dec 13 01:47:19.954834 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff] Dec 13 01:47:19.954840 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges Dec 13 01:47:19.954848 kernel: psci: probing for conduit method from ACPI. Dec 13 01:47:19.954854 kernel: psci: PSCIv1.1 detected in firmware. 
Dec 13 01:47:19.954861 kernel: psci: Using standard PSCI v0.2 function IDs Dec 13 01:47:19.954873 kernel: psci: Trusted OS migration not required Dec 13 01:47:19.954880 kernel: psci: SMC Calling Convention v1.1 Dec 13 01:47:19.954887 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) Dec 13 01:47:19.954894 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976 Dec 13 01:47:19.954901 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096 Dec 13 01:47:19.954908 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 Dec 13 01:47:19.954914 kernel: Detected PIPT I-cache on CPU0 Dec 13 01:47:19.954921 kernel: CPU features: detected: GIC system register CPU interface Dec 13 01:47:19.954928 kernel: CPU features: detected: Hardware dirty bit management Dec 13 01:47:19.954934 kernel: CPU features: detected: Spectre-v4 Dec 13 01:47:19.954941 kernel: CPU features: detected: Spectre-BHB Dec 13 01:47:19.954947 kernel: CPU features: kernel page table isolation forced ON by KASLR Dec 13 01:47:19.954954 kernel: CPU features: detected: Kernel page table isolation (KPTI) Dec 13 01:47:19.954962 kernel: CPU features: detected: ARM erratum 1418040 Dec 13 01:47:19.954969 kernel: CPU features: detected: SSBS not fully self-synchronizing Dec 13 01:47:19.954975 kernel: alternatives: applying boot alternatives Dec 13 01:47:19.954984 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=9494f75a68cfbdce95d0d2f9b58d6d75bc38ee5b4e31dfc2a6da695ffafefba6 Dec 13 01:47:19.954991 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Dec 13 01:47:19.954998 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Dec 13 01:47:19.955004 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Dec 13 01:47:19.955011 kernel: Fallback order for Node 0: 0 Dec 13 01:47:19.955018 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024 Dec 13 01:47:19.955024 kernel: Policy zone: DMA Dec 13 01:47:19.955031 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Dec 13 01:47:19.955039 kernel: software IO TLB: area num 4. Dec 13 01:47:19.955045 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB) Dec 13 01:47:19.955052 kernel: Memory: 2386532K/2572288K available (10240K kernel code, 2184K rwdata, 8096K rodata, 39360K init, 897K bss, 185756K reserved, 0K cma-reserved) Dec 13 01:47:19.955059 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Dec 13 01:47:19.955066 kernel: trace event string verifier disabled Dec 13 01:47:19.955072 kernel: rcu: Preemptible hierarchical RCU implementation. Dec 13 01:47:19.955079 kernel: rcu: RCU event tracing is enabled. Dec 13 01:47:19.955086 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Dec 13 01:47:19.955093 kernel: Trampoline variant of Tasks RCU enabled. Dec 13 01:47:19.955099 kernel: Tracing variant of Tasks RCU enabled. Dec 13 01:47:19.955106 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
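[Editor's note: the command line logged above is Flatcar-specific: root=LABEL=ROOT selects the root filesystem, mount.usr= and verity.usrhash= pin a dm-verity-protected /usr, and the kernel notes that BOOT_IMAGE is unknown to it and passed through to user space. A minimal sketch of how such a string decomposes into parameters; this is an illustration, not the kernel's or Flatcar's actual parser, and quoted values containing spaces are not handled.]

```python
# Illustrative only: split a kernel command line like the one logged above
# into bare flags and key=value parameters.
def parse_cmdline(cmdline: str) -> dict:
    params = {}
    for token in cmdline.split():
        key, sep, value = token.partition("=")
        params[key] = value if sep else True  # bare flags (e.g. "quiet") -> True
    return params

if __name__ == "__main__":
    with open("/proc/cmdline") as f:  # the same string the kernel logs at boot
        params = parse_cmdline(f.read())
    # e.g. params.get("verity.usrhash") -> the dm-verity root hash for /usr
    print(params.get("root"), params.get("mount.usr"))
```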
Dec 13 01:47:19.955113 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Dec 13 01:47:19.955120 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Dec 13 01:47:19.955127 kernel: GICv3: 256 SPIs implemented Dec 13 01:47:19.955133 kernel: GICv3: 0 Extended SPIs implemented Dec 13 01:47:19.955140 kernel: Root IRQ handler: gic_handle_irq Dec 13 01:47:19.955147 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Dec 13 01:47:19.955153 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 Dec 13 01:47:19.955160 kernel: ITS [mem 0x08080000-0x0809ffff] Dec 13 01:47:19.955167 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1) Dec 13 01:47:19.955174 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1) Dec 13 01:47:19.955180 kernel: GICv3: using LPI property table @0x00000000400f0000 Dec 13 01:47:19.955187 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000 Dec 13 01:47:19.955195 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Dec 13 01:47:19.955202 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 01:47:19.955208 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Dec 13 01:47:19.955215 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Dec 13 01:47:19.955222 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Dec 13 01:47:19.955229 kernel: arm-pv: using stolen time PV Dec 13 01:47:19.955236 kernel: Console: colour dummy device 80x25 Dec 13 01:47:19.955242 kernel: ACPI: Core revision 20230628 Dec 13 01:47:19.955249 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Dec 13 01:47:19.955256 kernel: pid_max: default: 32768 minimum: 301 Dec 13 01:47:19.955264 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Dec 13 01:47:19.955271 kernel: landlock: Up and running. Dec 13 01:47:19.955278 kernel: SELinux: Initializing. Dec 13 01:47:19.955285 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Dec 13 01:47:19.955292 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Dec 13 01:47:19.955299 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Dec 13 01:47:19.955306 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Dec 13 01:47:19.955312 kernel: rcu: Hierarchical SRCU implementation. Dec 13 01:47:19.955319 kernel: rcu: Max phase no-delay instances is 400. Dec 13 01:47:19.955327 kernel: Platform MSI: ITS@0x8080000 domain created Dec 13 01:47:19.955334 kernel: PCI/MSI: ITS@0x8080000 domain created Dec 13 01:47:19.955341 kernel: Remapping and enabling EFI services. Dec 13 01:47:19.955361 kernel: smp: Bringing up secondary CPUs ... 
Dec 13 01:47:19.955367 kernel: Detected PIPT I-cache on CPU1 Dec 13 01:47:19.955374 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 Dec 13 01:47:19.955381 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000 Dec 13 01:47:19.955388 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 01:47:19.955395 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Dec 13 01:47:19.955402 kernel: Detected PIPT I-cache on CPU2 Dec 13 01:47:19.955410 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000 Dec 13 01:47:19.955417 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000 Dec 13 01:47:19.955428 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 01:47:19.955437 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1] Dec 13 01:47:19.955444 kernel: Detected PIPT I-cache on CPU3 Dec 13 01:47:19.955451 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000 Dec 13 01:47:19.955459 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000 Dec 13 01:47:19.955466 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 01:47:19.955473 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1] Dec 13 01:47:19.955481 kernel: smp: Brought up 1 node, 4 CPUs Dec 13 01:47:19.955489 kernel: SMP: Total of 4 processors activated. Dec 13 01:47:19.955496 kernel: CPU features: detected: 32-bit EL0 Support Dec 13 01:47:19.955503 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Dec 13 01:47:19.955510 kernel: CPU features: detected: Common not Private translations Dec 13 01:47:19.955517 kernel: CPU features: detected: CRC32 instructions Dec 13 01:47:19.955524 kernel: CPU features: detected: Enhanced Virtualization Traps Dec 13 01:47:19.955532 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Dec 13 01:47:19.955540 kernel: CPU features: detected: LSE atomic instructions Dec 13 01:47:19.955547 kernel: CPU features: detected: Privileged Access Never Dec 13 01:47:19.955554 kernel: CPU features: detected: RAS Extension Support Dec 13 01:47:19.955561 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Dec 13 01:47:19.955569 kernel: CPU: All CPU(s) started at EL1 Dec 13 01:47:19.955587 kernel: alternatives: applying system-wide alternatives Dec 13 01:47:19.955598 kernel: devtmpfs: initialized Dec 13 01:47:19.955606 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Dec 13 01:47:19.955613 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Dec 13 01:47:19.955623 kernel: pinctrl core: initialized pinctrl subsystem Dec 13 01:47:19.955630 kernel: SMBIOS 3.0.0 present. 
Dec 13 01:47:19.955638 kernel: DMI: QEMU KVM Virtual Machine, BIOS edk2-20230524-3.fc38 05/24/2023 Dec 13 01:47:19.955645 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Dec 13 01:47:19.955652 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Dec 13 01:47:19.955659 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Dec 13 01:47:19.955667 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Dec 13 01:47:19.955674 kernel: audit: initializing netlink subsys (disabled) Dec 13 01:47:19.955681 kernel: audit: type=2000 audit(0.023:1): state=initialized audit_enabled=0 res=1 Dec 13 01:47:19.955690 kernel: thermal_sys: Registered thermal governor 'step_wise' Dec 13 01:47:19.955697 kernel: cpuidle: using governor menu Dec 13 01:47:19.955704 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Dec 13 01:47:19.955711 kernel: ASID allocator initialised with 32768 entries Dec 13 01:47:19.955719 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Dec 13 01:47:19.955726 kernel: Serial: AMBA PL011 UART driver Dec 13 01:47:19.955733 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Dec 13 01:47:19.955740 kernel: Modules: 0 pages in range for non-PLT usage Dec 13 01:47:19.955747 kernel: Modules: 509040 pages in range for PLT usage Dec 13 01:47:19.955756 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Dec 13 01:47:19.955763 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Dec 13 01:47:19.955771 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Dec 13 01:47:19.955778 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Dec 13 01:47:19.955785 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Dec 13 01:47:19.955792 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Dec 13 01:47:19.955799 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Dec 13 01:47:19.955806 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Dec 13 01:47:19.955814 kernel: ACPI: Added _OSI(Module Device) Dec 13 01:47:19.955822 kernel: ACPI: Added _OSI(Processor Device) Dec 13 01:47:19.955829 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Dec 13 01:47:19.955836 kernel: ACPI: Added _OSI(Processor Aggregator Device) Dec 13 01:47:19.955844 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Dec 13 01:47:19.955851 kernel: ACPI: Interpreter enabled Dec 13 01:47:19.955858 kernel: ACPI: Using GIC for interrupt routing Dec 13 01:47:19.955865 kernel: ACPI: MCFG table detected, 1 entries Dec 13 01:47:19.955872 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA Dec 13 01:47:19.955879 kernel: printk: console [ttyAMA0] enabled Dec 13 01:47:19.955888 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Dec 13 01:47:19.956025 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Dec 13 01:47:19.956098 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Dec 13 01:47:19.956163 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Dec 13 01:47:19.956228 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 Dec 13 01:47:19.956293 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] Dec 13 01:47:19.956303 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Dec 13 01:47:19.956312 kernel: PCI host bridge to bus 0000:00 Dec 13 01:47:19.956380 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] Dec 13 01:47:19.956439 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Dec 13 01:47:19.956497 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] Dec 13 01:47:19.956556 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Dec 13 01:47:19.956668 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 Dec 13 01:47:19.956742 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 Dec 13 01:47:19.956810 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f] Dec 13 01:47:19.956877 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff] Dec 13 01:47:19.956942 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref] Dec 13 01:47:19.957007 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref] Dec 13 01:47:19.957072 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff] Dec 13 01:47:19.957136 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f] Dec 13 01:47:19.957195 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Dec 13 01:47:19.957254 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Dec 13 01:47:19.957312 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Dec 13 01:47:19.957321 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Dec 13 01:47:19.957329 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Dec 13 01:47:19.957336 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Dec 13 01:47:19.957343 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Dec 13 01:47:19.957351 kernel: iommu: Default domain type: Translated Dec 13 01:47:19.957358 kernel: iommu: DMA domain TLB invalidation policy: strict mode Dec 13 01:47:19.957367 kernel: efivars: Registered efivars operations Dec 13 01:47:19.957374 kernel: vgaarb: loaded Dec 13 01:47:19.957381 kernel: clocksource: Switched to clocksource arch_sys_counter Dec 13 01:47:19.957388 kernel: VFS: Disk quotas dquot_6.6.0 Dec 13 01:47:19.957396 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Dec 13 01:47:19.957403 kernel: pnp: PnP ACPI init Dec 13 01:47:19.957470 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Dec 13 01:47:19.957480 kernel: pnp: PnP ACPI: found 1 devices Dec 13 01:47:19.957489 kernel: NET: Registered PF_INET protocol family Dec 13 01:47:19.957496 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Dec 13 01:47:19.957504 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Dec 13 01:47:19.957511 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Dec 13 01:47:19.957519 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Dec 13 01:47:19.957526 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Dec 13 01:47:19.957533 kernel: TCP: Hash tables configured (established 32768 bind 32768) Dec 13 01:47:19.957540 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Dec 13 01:47:19.957548 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Dec 13 01:47:19.957556 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Dec 13 01:47:19.957563 kernel: PCI: CLS 0 bytes, default 64
Dec 13 01:47:19.957577 kernel: kvm [1]: HYP mode not available Dec 13 01:47:19.957594 kernel: Initialise system trusted keyrings Dec 13 01:47:19.957601 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Dec 13 01:47:19.957608 kernel: Key type asymmetric registered Dec 13 01:47:19.957616 kernel: Asymmetric key parser 'x509' registered Dec 13 01:47:19.957623 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Dec 13 01:47:19.957630 kernel: io scheduler mq-deadline registered Dec 13 01:47:19.957640 kernel: io scheduler kyber registered Dec 13 01:47:19.957647 kernel: io scheduler bfq registered Dec 13 01:47:19.957654 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Dec 13 01:47:19.957662 kernel: ACPI: button: Power Button [PWRB] Dec 13 01:47:19.957669 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Dec 13 01:47:19.957742 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007) Dec 13 01:47:19.957752 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Dec 13 01:47:19.957760 kernel: thunder_xcv, ver 1.0 Dec 13 01:47:19.957767 kernel: thunder_bgx, ver 1.0 Dec 13 01:47:19.957776 kernel: nicpf, ver 1.0 Dec 13 01:47:19.957783 kernel: nicvf, ver 1.0 Dec 13 01:47:19.957860 kernel: rtc-efi rtc-efi.0: registered as rtc0 Dec 13 01:47:19.957922 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-12-13T01:47:19 UTC (1734054439) Dec 13 01:47:19.957931 kernel: hid: raw HID events driver (C) Jiri Kosina Dec 13 01:47:19.957939 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available Dec 13 01:47:19.957946 kernel: watchdog: Delayed init of the lockup detector failed: -19 Dec 13 01:47:19.957953 kernel: watchdog: Hard watchdog permanently disabled Dec 13 01:47:19.957962 kernel: NET: Registered PF_INET6 protocol family Dec 13 01:47:19.957970 kernel: Segment Routing with IPv6 Dec 13 01:47:19.957977 kernel: In-situ OAM (IOAM) with IPv6 Dec 13 01:47:19.957984 kernel: NET: Registered PF_PACKET protocol family Dec 13 01:47:19.957991 kernel: Key type dns_resolver registered Dec 13 01:47:19.957998 kernel: registered taskstats version 1 Dec 13 01:47:19.958006 kernel: Loading compiled-in X.509 certificates Dec 13 01:47:19.958013 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.65-flatcar: d83da9ddb9e3c2439731828371f21d0232fd9ffb' Dec 13 01:47:19.958020 kernel: Key type .fscrypt registered Dec 13 01:47:19.958029 kernel: Key type fscrypt-provisioning registered Dec 13 01:47:19.958037 kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 13 01:47:19.958044 kernel: ima: Allocated hash algorithm: sha1 Dec 13 01:47:19.958051 kernel: ima: No architecture policies found Dec 13 01:47:19.958059 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Dec 13 01:47:19.958066 kernel: clk: Disabling unused clocks Dec 13 01:47:19.958073 kernel: Freeing unused kernel memory: 39360K Dec 13 01:47:19.958080 kernel: Run /init as init process Dec 13 01:47:19.958087 kernel: with arguments: Dec 13 01:47:19.958096 kernel: /init Dec 13 01:47:19.958103 kernel: with environment: Dec 13 01:47:19.958110 kernel: HOME=/ Dec 13 01:47:19.958117 kernel: TERM=linux Dec 13 01:47:19.958124 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Dec 13 01:47:19.958133 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Dec 13 01:47:19.958142 systemd[1]: Detected virtualization kvm. Dec 13 01:47:19.958150 systemd[1]: Detected architecture arm64. Dec 13 01:47:19.958159 systemd[1]: Running in initrd. Dec 13 01:47:19.958167 systemd[1]: No hostname configured, using default hostname. Dec 13 01:47:19.958174 systemd[1]: Hostname set to <localhost>. Dec 13 01:47:19.958182 systemd[1]: Initializing machine ID from VM UUID. Dec 13 01:47:19.958190 systemd[1]: Queued start job for default target initrd.target. Dec 13 01:47:19.958198 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 01:47:19.958206 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 01:47:19.958214 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Dec 13 01:47:19.958223 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 13 01:47:19.958231 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Dec 13 01:47:19.958239 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Dec 13 01:47:19.958248 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Dec 13 01:47:19.958256 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Dec 13 01:47:19.958263 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 01:47:19.958273 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 13 01:47:19.958280 systemd[1]: Reached target paths.target - Path Units. Dec 13 01:47:19.958288 systemd[1]: Reached target slices.target - Slice Units. Dec 13 01:47:19.958296 systemd[1]: Reached target swap.target - Swaps. Dec 13 01:47:19.958304 systemd[1]: Reached target timers.target - Timer Units. Dec 13 01:47:19.958311 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Dec 13 01:47:19.958319 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 13 01:47:19.958327 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Dec 13 01:47:19.958335 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Dec 13 01:47:19.958344 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 13 01:47:19.958352 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 13 01:47:19.958360 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 01:47:19.958368 systemd[1]: Reached target sockets.target - Socket Units. Dec 13 01:47:19.958375 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Dec 13 01:47:19.958383 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 13 01:47:19.958391 systemd[1]: Finished network-cleanup.service - Network Cleanup. Dec 13 01:47:19.958398 systemd[1]: Starting systemd-fsck-usr.service... Dec 13 01:47:19.958406 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 13 01:47:19.958415 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 13 01:47:19.958423 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 01:47:19.958431 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Dec 13 01:47:19.958439 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 01:47:19.958447 systemd[1]: Finished systemd-fsck-usr.service. Dec 13 01:47:19.958456 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 13 01:47:19.958464 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:47:19.958488 systemd-journald[237]: Collecting audit messages is disabled. Dec 13 01:47:19.958508 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 01:47:19.958516 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 13 01:47:19.958524 systemd-journald[237]: Journal started Dec 13 01:47:19.958543 systemd-journald[237]: Runtime Journal (/run/log/journal/85025ebe7af74c8380234720c2e88491) is 5.9M, max 47.3M, 41.4M free. Dec 13 01:47:19.949843 systemd-modules-load[238]: Inserted module 'overlay' Dec 13 01:47:19.960656 systemd[1]: Started systemd-journald.service - Journal Service. Dec 13 01:47:19.963595 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Dec 13 01:47:19.964032 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 13 01:47:19.968653 kernel: Bridge firewalling registered Dec 13 01:47:19.965781 systemd-modules-load[238]: Inserted module 'br_netfilter' Dec 13 01:47:19.966750 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 13 01:47:19.970845 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 13 01:47:19.975726 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 13 01:47:19.977728 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 01:47:19.980841 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 01:47:19.983319 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 01:47:19.985355 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Dec 13 01:47:19.987824 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 13 01:47:19.990377 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Dec 13 01:47:19.999306 dracut-cmdline[271]: dracut-dracut-053 Dec 13 01:47:20.001783 dracut-cmdline[271]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=9494f75a68cfbdce95d0d2f9b58d6d75bc38ee5b4e31dfc2a6da695ffafefba6 Dec 13 01:47:20.020121 systemd-resolved[274]: Positive Trust Anchors: Dec 13 01:47:20.020139 systemd-resolved[274]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 01:47:20.020171 systemd-resolved[274]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 13 01:47:20.024813 systemd-resolved[274]: Defaulting to hostname 'linux'. Dec 13 01:47:20.026185 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 13 01:47:20.029259 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 13 01:47:20.071623 kernel: SCSI subsystem initialized Dec 13 01:47:20.075600 kernel: Loading iSCSI transport class v2.0-870. Dec 13 01:47:20.085603 kernel: iscsi: registered transport (tcp) Dec 13 01:47:20.096619 kernel: iscsi: registered transport (qla4xxx) Dec 13 01:47:20.096658 kernel: QLogic iSCSI HBA Driver Dec 13 01:47:20.137652 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Dec 13 01:47:20.154748 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Dec 13 01:47:20.173601 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Dec 13 01:47:20.173651 kernel: device-mapper: uevent: version 1.0.3 Dec 13 01:47:20.174760 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Dec 13 01:47:20.222621 kernel: raid6: neonx8 gen() 15777 MB/s Dec 13 01:47:20.239609 kernel: raid6: neonx4 gen() 15653 MB/s Dec 13 01:47:20.256608 kernel: raid6: neonx2 gen() 13177 MB/s Dec 13 01:47:20.273606 kernel: raid6: neonx1 gen() 10438 MB/s Dec 13 01:47:20.290606 kernel: raid6: int64x8 gen() 6924 MB/s Dec 13 01:47:20.307615 kernel: raid6: int64x4 gen() 7318 MB/s Dec 13 01:47:20.324615 kernel: raid6: int64x2 gen() 6115 MB/s Dec 13 01:47:20.341762 kernel: raid6: int64x1 gen() 5018 MB/s Dec 13 01:47:20.341807 kernel: raid6: using algorithm neonx8 gen() 15777 MB/s Dec 13 01:47:20.359716 kernel: raid6: .... xor() 11817 MB/s, rmw enabled Dec 13 01:47:20.359750 kernel: raid6: using neon recovery algorithm Dec 13 01:47:20.364605 kernel: xor: measuring software checksum speed Dec 13 01:47:20.365814 kernel: 8regs : 17134 MB/sec Dec 13 01:47:20.365843 kernel: 32regs : 19679 MB/sec Dec 13 01:47:20.367065 kernel: arm64_neon : 26857 MB/sec Dec 13 01:47:20.367076 kernel: xor: using function: arm64_neon (26857 MB/sec) Dec 13 01:47:20.418600 kernel: Btrfs loaded, zoned=no, fsverity=no Dec 13 01:47:20.428994 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. 
Dec 13 01:47:20.440851 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 01:47:20.452634 systemd-udevd[458]: Using default interface naming scheme 'v255'. Dec 13 01:47:20.455790 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 01:47:20.459018 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Dec 13 01:47:20.473293 dracut-pre-trigger[467]: rd.md=0: removing MD RAID activation Dec 13 01:47:20.498271 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Dec 13 01:47:20.509749 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 13 01:47:20.548708 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 01:47:20.556738 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Dec 13 01:47:20.569783 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Dec 13 01:47:20.571342 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Dec 13 01:47:20.573402 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 01:47:20.575700 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 13 01:47:20.585802 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Dec 13 01:47:20.595903 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Dec 13 01:47:20.601046 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues Dec 13 01:47:20.607709 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Dec 13 01:47:20.607817 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Dec 13 01:47:20.607828 kernel: GPT:9289727 != 19775487 Dec 13 01:47:20.607837 kernel: GPT:Alternate GPT header not at the end of the disk. Dec 13 01:47:20.607846 kernel: GPT:9289727 != 19775487 Dec 13 01:47:20.607861 kernel: GPT: Use GNU Parted to correct GPT errors. Dec 13 01:47:20.607871 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 01:47:20.609285 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 01:47:20.609400 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 01:47:20.615577 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 01:47:20.617035 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 01:47:20.617177 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:47:20.619227 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 01:47:20.630858 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 01:47:20.634071 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (517) Dec 13 01:47:20.634107 kernel: BTRFS: device fsid 2893cd1e-612b-4262-912c-10787dc9c881 devid 1 transid 46 /dev/vda3 scanned by (udev-worker) (504) Dec 13 01:47:20.648681 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:47:20.656314 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Dec 13 01:47:20.660955 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Dec 13 01:47:20.665536 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. 
Dec 13 01:47:20.669517 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Dec 13 01:47:20.670820 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Dec 13 01:47:20.687769 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Dec 13 01:47:20.692760 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 01:47:20.695727 disk-uuid[548]: Primary Header is updated. Dec 13 01:47:20.695727 disk-uuid[548]: Secondary Entries is updated. Dec 13 01:47:20.695727 disk-uuid[548]: Secondary Header is updated. Dec 13 01:47:20.698991 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 01:47:20.715953 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 01:47:21.709904 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 01:47:21.709954 disk-uuid[549]: The operation has completed successfully. Dec 13 01:47:21.731833 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 13 01:47:21.731945 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Dec 13 01:47:21.749749 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Dec 13 01:47:21.755106 sh[571]: Success Dec 13 01:47:21.771407 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Dec 13 01:47:21.799225 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Dec 13 01:47:21.816948 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Dec 13 01:47:21.818646 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Dec 13 01:47:21.831604 kernel: BTRFS info (device dm-0): first mount of filesystem 2893cd1e-612b-4262-912c-10787dc9c881 Dec 13 01:47:21.831638 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Dec 13 01:47:21.831649 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Dec 13 01:47:21.831660 kernel: BTRFS info (device dm-0): disabling log replay at mount time Dec 13 01:47:21.833028 kernel: BTRFS info (device dm-0): using free space tree Dec 13 01:47:21.837181 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Dec 13 01:47:21.838249 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Dec 13 01:47:21.853749 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Dec 13 01:47:21.855327 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Dec 13 01:47:21.862290 kernel: BTRFS info (device vda6): first mount of filesystem dbef6a22-a801-4c1e-a0cd-3fc525f899dd Dec 13 01:47:21.862328 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Dec 13 01:47:21.862339 kernel: BTRFS info (device vda6): using free space tree Dec 13 01:47:21.865604 kernel: BTRFS info (device vda6): auto enabling async discard Dec 13 01:47:21.872680 systemd[1]: mnt-oem.mount: Deactivated successfully. Dec 13 01:47:21.874651 kernel: BTRFS info (device vda6): last unmount of filesystem dbef6a22-a801-4c1e-a0cd-3fc525f899dd Dec 13 01:47:21.879177 systemd[1]: Finished ignition-setup.service - Ignition (setup). Dec 13 01:47:21.885759 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
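[Editor's note: the GPT complaints earlier in the log ("Primary header thinks Alt. header is not at the end of the disk", "GPT:9289727 != 19775487") are expected when an image is written to a larger virtual disk than it was built for: the primary header still records the backup header at the image's old end (LBA 9289727) instead of the real last LBA of /dev/vda (19775487). Flatcar's disk-uuid step then rewrites the headers, which is why the log reports the primary header and secondary entries/header as updated. A hedged sketch of the consistency check the kernel performs, assuming the 512-byte sectors reported for /dev/vda; actual repair is left to tools such as GNU Parted, as the kernel message suggests.]

```python
# Illustrative only: reproduce the "Alt. header is not at the end of the
# disk" check from the GPT messages above.
import struct

def backup_header_misplaced(dev_path: str) -> bool:
    with open(dev_path, "rb") as dev:
        dev.seek(0, 2)                    # seek to end of device
        last_lba = dev.tell() // 512 - 1  # 19775487 in the log above
        dev.seek(512)                     # primary GPT header lives at LBA 1
        header = dev.read(92)             # GPT header is 92 bytes
    assert header[:8] == b"EFI PART", "no GPT signature"
    backup_lba = struct.unpack_from("<Q", header, 32)[0]  # AlternateLBA field
    return backup_lba != last_lba         # 9289727 != 19775487 in the log

# print(backup_header_misplaced("/dev/vda"))
```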
Dec 13 01:47:21.947231 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 13 01:47:21.956765 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 13 01:47:21.981482 systemd-networkd[764]: lo: Link UP Dec 13 01:47:21.981494 systemd-networkd[764]: lo: Gained carrier Dec 13 01:47:21.982146 systemd-networkd[764]: Enumeration completed Dec 13 01:47:21.982233 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 13 01:47:21.982729 systemd-networkd[764]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 01:47:21.982732 systemd-networkd[764]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 01:47:21.988928 ignition[663]: Ignition 2.19.0 Dec 13 01:47:21.983539 systemd-networkd[764]: eth0: Link UP Dec 13 01:47:21.988937 ignition[663]: Stage: fetch-offline Dec 13 01:47:21.983542 systemd-networkd[764]: eth0: Gained carrier Dec 13 01:47:21.988972 ignition[663]: no configs at "/usr/lib/ignition/base.d" Dec 13 01:47:21.983549 systemd-networkd[764]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 01:47:21.988980 ignition[663]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 01:47:21.983877 systemd[1]: Reached target network.target - Network. Dec 13 01:47:21.989135 ignition[663]: parsed url from cmdline: "" Dec 13 01:47:21.989140 ignition[663]: no config URL provided Dec 13 01:47:21.989145 ignition[663]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 01:47:21.999619 systemd-networkd[764]: eth0: DHCPv4 address 10.0.0.141/16, gateway 10.0.0.1 acquired from 10.0.0.1 Dec 13 01:47:21.989152 ignition[663]: no config at "/usr/lib/ignition/user.ign" Dec 13 01:47:21.989174 ignition[663]: op(1): [started] loading QEMU firmware config module Dec 13 01:47:21.989178 ignition[663]: op(1): executing: "modprobe" "qemu_fw_cfg" Dec 13 01:47:21.997035 ignition[663]: op(1): [finished] loading QEMU firmware config module Dec 13 01:47:21.997054 ignition[663]: QEMU firmware config was not found. Ignoring... Dec 13 01:47:22.040498 ignition[663]: parsing config with SHA512: 3f5a39360e071eae30aa0422822f08bb307439dc3f4cca46dfb7aef297c99d71329114c3cb313d0a77a2ea1a0e386cfe769e06160949984641e04af3bb165507 Dec 13 01:47:22.045305 unknown[663]: fetched base config from "system" Dec 13 01:47:22.045320 unknown[663]: fetched user config from "qemu" Dec 13 01:47:22.046235 ignition[663]: fetch-offline: fetch-offline passed Dec 13 01:47:22.046630 ignition[663]: Ignition finished successfully Dec 13 01:47:22.048501 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Dec 13 01:47:22.049864 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Dec 13 01:47:22.059806 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Dec 13 01:47:22.069525 ignition[771]: Ignition 2.19.0 Dec 13 01:47:22.069535 ignition[771]: Stage: kargs Dec 13 01:47:22.069740 ignition[771]: no configs at "/usr/lib/ignition/base.d" Dec 13 01:47:22.069750 ignition[771]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 01:47:22.070613 ignition[771]: kargs: kargs passed Dec 13 01:47:22.073127 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). 
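[Editor's note: the fetch-offline stage above reads a baked-in config at /usr/lib/ignition/user.ign, then loads qemu_fw_cfg to expose QEMU's firmware-config blob (absent here, hence "QEMU firmware config was not found. Ignoring..."), and logs the SHA512 of whatever config it ends up parsing. A rough sketch of that lookup order; the fw_cfg sysfs path and key name are assumptions based on Ignition's QEMU platform support, not something shown in this log.]

```python
# Illustrative only: the fetch-offline lookup order visible in the log.
# The fw_cfg key "opt/com.coreos/config" (exposed under sysfs once the
# qemu_fw_cfg module is loaded) is an assumption, not taken from this log.
import hashlib
import pathlib

CANDIDATES = [
    pathlib.Path("/usr/lib/ignition/user.ign"),
    pathlib.Path("/sys/firmware/qemu_fw_cfg/by_name/opt/com.coreos/config/raw"),
]

def fetch_offline() -> bytes | None:
    for path in CANDIDATES:
        try:
            data = path.read_bytes()
        except OSError:
            continue  # e.g. "QEMU firmware config was not found. Ignoring..."
        # Ignition logs the digest of whatever config it parses:
        print("parsing config with SHA512:", hashlib.sha512(data).hexdigest())
        return data
    return None
```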
Dec 13 01:47:22.070654 ignition[771]: Ignition finished successfully Dec 13 01:47:22.082740 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Dec 13 01:47:22.091522 ignition[779]: Ignition 2.19.0 Dec 13 01:47:22.091532 ignition[779]: Stage: disks Dec 13 01:47:22.091717 ignition[779]: no configs at "/usr/lib/ignition/base.d" Dec 13 01:47:22.091727 ignition[779]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 01:47:22.092523 ignition[779]: disks: disks passed Dec 13 01:47:22.092579 ignition[779]: Ignition finished successfully Dec 13 01:47:22.096166 systemd[1]: Finished ignition-disks.service - Ignition (disks). Dec 13 01:47:22.097838 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Dec 13 01:47:22.099032 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Dec 13 01:47:22.101065 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 13 01:47:22.102964 systemd[1]: Reached target sysinit.target - System Initialization. Dec 13 01:47:22.104650 systemd[1]: Reached target basic.target - Basic System. Dec 13 01:47:22.116723 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Dec 13 01:47:22.125776 systemd-fsck[789]: ROOT: clean, 14/553520 files, 52654/553472 blocks Dec 13 01:47:22.129510 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Dec 13 01:47:22.131613 systemd[1]: Mounting sysroot.mount - /sysroot... Dec 13 01:47:22.175614 kernel: EXT4-fs (vda9): mounted filesystem 32632247-db8d-4541-89c0-6f68c7fa7ee3 r/w with ordered data mode. Quota mode: none. Dec 13 01:47:22.175665 systemd[1]: Mounted sysroot.mount - /sysroot. Dec 13 01:47:22.176908 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Dec 13 01:47:22.192684 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 13 01:47:22.195193 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Dec 13 01:47:22.196225 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Dec 13 01:47:22.196268 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 13 01:47:22.196291 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Dec 13 01:47:22.202622 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Dec 13 01:47:22.206532 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (797) Dec 13 01:47:22.206555 kernel: BTRFS info (device vda6): first mount of filesystem dbef6a22-a801-4c1e-a0cd-3fc525f899dd Dec 13 01:47:22.206572 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Dec 13 01:47:22.205374 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Dec 13 01:47:22.209937 kernel: BTRFS info (device vda6): using free space tree Dec 13 01:47:22.211603 kernel: BTRFS info (device vda6): auto enabling async discard Dec 13 01:47:22.212812 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Dec 13 01:47:22.246648 initrd-setup-root[823]: cut: /sysroot/etc/passwd: No such file or directory Dec 13 01:47:22.250877 initrd-setup-root[830]: cut: /sysroot/etc/group: No such file or directory Dec 13 01:47:22.254624 initrd-setup-root[837]: cut: /sysroot/etc/shadow: No such file or directory Dec 13 01:47:22.258184 initrd-setup-root[844]: cut: /sysroot/etc/gshadow: No such file or directory Dec 13 01:47:22.325467 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Dec 13 01:47:22.334749 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Dec 13 01:47:22.337040 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Dec 13 01:47:22.342594 kernel: BTRFS info (device vda6): last unmount of filesystem dbef6a22-a801-4c1e-a0cd-3fc525f899dd Dec 13 01:47:22.357885 ignition[913]: INFO : Ignition 2.19.0 Dec 13 01:47:22.357885 ignition[913]: INFO : Stage: mount Dec 13 01:47:22.360221 ignition[913]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 01:47:22.360221 ignition[913]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 01:47:22.360221 ignition[913]: INFO : mount: mount passed Dec 13 01:47:22.360221 ignition[913]: INFO : Ignition finished successfully Dec 13 01:47:22.358716 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Dec 13 01:47:22.361819 systemd[1]: Finished ignition-mount.service - Ignition (mount). Dec 13 01:47:22.372732 systemd[1]: Starting ignition-files.service - Ignition (files)... Dec 13 01:47:22.827604 systemd[1]: sysroot-oem.mount: Deactivated successfully. Dec 13 01:47:22.843755 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 13 01:47:22.850654 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (925) Dec 13 01:47:22.850682 kernel: BTRFS info (device vda6): first mount of filesystem dbef6a22-a801-4c1e-a0cd-3fc525f899dd Dec 13 01:47:22.850693 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Dec 13 01:47:22.852207 kernel: BTRFS info (device vda6): using free space tree Dec 13 01:47:22.854598 kernel: BTRFS info (device vda6): auto enabling async discard Dec 13 01:47:22.855457 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Dec 13 01:47:22.881966 ignition[942]: INFO : Ignition 2.19.0 Dec 13 01:47:22.881966 ignition[942]: INFO : Stage: files Dec 13 01:47:22.883516 ignition[942]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 01:47:22.883516 ignition[942]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 01:47:22.883516 ignition[942]: DEBUG : files: compiled without relabeling support, skipping Dec 13 01:47:22.887184 ignition[942]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 13 01:47:22.887184 ignition[942]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 13 01:47:22.887184 ignition[942]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 13 01:47:22.887184 ignition[942]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 13 01:47:22.887184 ignition[942]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 13 01:47:22.886416 unknown[942]: wrote ssh authorized keys file for user: core Dec 13 01:47:22.894402 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Dec 13 01:47:22.894402 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Dec 13 01:47:22.956361 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Dec 13 01:47:23.062171 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Dec 13 01:47:23.064322 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Dec 13 01:47:23.064322 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Dec 13 01:47:23.064322 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Dec 13 01:47:23.064322 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Dec 13 01:47:23.064322 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 01:47:23.064322 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 01:47:23.064322 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 01:47:23.064322 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 01:47:23.064322 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 01:47:23.064322 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 01:47:23.064322 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" Dec 13 01:47:23.064322 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Dec 13 01:47:23.064322 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" Dec 13 01:47:23.064322 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-arm64.raw: attempt #1 Dec 13 01:47:23.393429 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Dec 13 01:47:23.774817 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" Dec 13 01:47:23.774817 ignition[942]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Dec 13 01:47:23.778268 ignition[942]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 01:47:23.778268 ignition[942]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 01:47:23.778268 ignition[942]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Dec 13 01:47:23.778268 ignition[942]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Dec 13 01:47:23.778268 ignition[942]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Dec 13 01:47:23.778268 ignition[942]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Dec 13 01:47:23.778268 ignition[942]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Dec 13 01:47:23.778268 ignition[942]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Dec 13 01:47:23.811657 ignition[942]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Dec 13 01:47:23.816446 ignition[942]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Dec 13 01:47:23.819095 ignition[942]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Dec 13 01:47:23.819095 ignition[942]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Dec 13 01:47:23.819095 ignition[942]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Dec 13 01:47:23.819095 ignition[942]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 13 01:47:23.819095 ignition[942]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 13 01:47:23.819095 ignition[942]: INFO : files: files passed Dec 13 01:47:23.819095 ignition[942]: INFO : Ignition finished successfully Dec 13 01:47:23.819530 systemd[1]: Finished ignition-files.service - Ignition (files). Dec 13 01:47:23.837757 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Dec 13 01:47:23.840987 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Dec 13 01:47:23.843166 systemd[1]: ignition-quench.service: Deactivated successfully.
Dec 13 01:47:23.843255 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Dec 13 01:47:23.849477 initrd-setup-root-after-ignition[970]: grep: /sysroot/oem/oem-release: No such file or directory Dec 13 01:47:23.853189 initrd-setup-root-after-ignition[972]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 01:47:23.853189 initrd-setup-root-after-ignition[972]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Dec 13 01:47:23.856245 initrd-setup-root-after-ignition[976]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 01:47:23.857570 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 13 01:47:23.859073 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Dec 13 01:47:23.868713 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Dec 13 01:47:23.891644 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 13 01:47:23.891750 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Dec 13 01:47:23.893870 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Dec 13 01:47:23.895722 systemd[1]: Reached target initrd.target - Initrd Default Target. Dec 13 01:47:23.897498 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Dec 13 01:47:23.898225 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Dec 13 01:47:23.918697 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 13 01:47:23.921703 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Dec 13 01:47:23.931921 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Dec 13 01:47:23.933140 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 01:47:23.935160 systemd[1]: Stopped target timers.target - Timer Units. Dec 13 01:47:23.936940 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 13 01:47:23.937045 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 13 01:47:23.937743 systemd-networkd[764]: eth0: Gained IPv6LL Dec 13 01:47:23.939746 systemd[1]: Stopped target initrd.target - Initrd Default Target. Dec 13 01:47:23.941793 systemd[1]: Stopped target basic.target - Basic System. Dec 13 01:47:23.943245 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Dec 13 01:47:23.945138 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Dec 13 01:47:23.946899 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Dec 13 01:47:23.948851 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Dec 13 01:47:23.950713 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Dec 13 01:47:23.952803 systemd[1]: Stopped target sysinit.target - System Initialization. Dec 13 01:47:23.953889 systemd[1]: Stopped target local-fs.target - Local File Systems. Dec 13 01:47:23.954944 systemd[1]: Stopped target swap.target - Swaps. Dec 13 01:47:23.956479 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 13 01:47:23.956621 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Dec 13 01:47:23.959417 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. 
Dec 13 01:47:23.961346 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 01:47:23.963032 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Dec 13 01:47:23.963665 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 01:47:23.964983 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 13 01:47:23.965091 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Dec 13 01:47:23.967632 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 13 01:47:23.967749 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Dec 13 01:47:23.969867 systemd[1]: Stopped target paths.target - Path Units. Dec 13 01:47:23.971375 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 13 01:47:23.974658 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 01:47:23.976711 systemd[1]: Stopped target slices.target - Slice Units. Dec 13 01:47:23.978563 systemd[1]: Stopped target sockets.target - Socket Units. Dec 13 01:47:23.980137 systemd[1]: iscsid.socket: Deactivated successfully. Dec 13 01:47:23.980227 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Dec 13 01:47:23.981936 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 13 01:47:23.982020 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 13 01:47:23.984134 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 13 01:47:23.984239 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 13 01:47:23.985941 systemd[1]: ignition-files.service: Deactivated successfully. Dec 13 01:47:23.986042 systemd[1]: Stopped ignition-files.service - Ignition (files). Dec 13 01:47:23.996845 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Dec 13 01:47:23.997727 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 13 01:47:23.997864 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 01:47:24.002773 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Dec 13 01:47:24.003632 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 13 01:47:24.003763 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 01:47:24.005607 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 13 01:47:24.005711 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Dec 13 01:47:24.012453 ignition[996]: INFO : Ignition 2.19.0 Dec 13 01:47:24.012453 ignition[996]: INFO : Stage: umount Dec 13 01:47:24.017264 ignition[996]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 01:47:24.017264 ignition[996]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 01:47:24.017264 ignition[996]: INFO : umount: umount passed Dec 13 01:47:24.017264 ignition[996]: INFO : Ignition finished successfully Dec 13 01:47:24.012545 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 13 01:47:24.012678 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Dec 13 01:47:24.016517 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 13 01:47:24.017026 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 13 01:47:24.017119 systemd[1]: Stopped ignition-mount.service - Ignition (mount). 
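Ignition runs in fixed stages inside the initrd (fetch-offline/fetch, kargs, disks, mount, files, and finally the umount stage shown here), and every stage logs under the ignition syslog identifier (ignition[996] above). After boot the whole provisioning trace can therefore be isolated from the journal with, for example:

    journalctl -t ignition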
Dec 13 01:47:24.018795 systemd[1]: Stopped target network.target - Network. Dec 13 01:47:24.020148 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 13 01:47:24.020229 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Dec 13 01:47:24.022236 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 13 01:47:24.022284 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Dec 13 01:47:24.023916 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 13 01:47:24.023965 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Dec 13 01:47:24.025574 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Dec 13 01:47:24.025635 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Dec 13 01:47:24.027526 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Dec 13 01:47:24.029463 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Dec 13 01:47:24.031374 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 13 01:47:24.031462 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Dec 13 01:47:24.033434 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 13 01:47:24.033529 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Dec 13 01:47:24.034643 systemd-networkd[764]: eth0: DHCPv6 lease lost Dec 13 01:47:24.036255 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 13 01:47:24.036384 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Dec 13 01:47:24.038000 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 13 01:47:24.038038 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Dec 13 01:47:24.046681 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Dec 13 01:47:24.048420 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 13 01:47:24.048479 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 13 01:47:24.050491 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 01:47:24.053985 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 13 01:47:24.055769 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Dec 13 01:47:24.064373 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 01:47:24.064453 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 13 01:47:24.066121 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 13 01:47:24.066168 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Dec 13 01:47:24.068236 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Dec 13 01:47:24.068283 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 01:47:24.070549 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 13 01:47:24.070750 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 01:47:24.073780 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 13 01:47:24.073857 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Dec 13 01:47:24.076181 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 13 01:47:24.076237 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. 
Dec 13 01:47:24.078097 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 13 01:47:24.078131 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 01:47:24.079894 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 13 01:47:24.079941 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Dec 13 01:47:24.082797 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 13 01:47:24.082840 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Dec 13 01:47:24.085693 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 01:47:24.085737 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 01:47:24.094705 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Dec 13 01:47:24.096265 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Dec 13 01:47:24.096320 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 01:47:24.098221 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 01:47:24.098266 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:47:24.102201 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 13 01:47:24.103623 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Dec 13 01:47:24.105043 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Dec 13 01:47:24.107252 systemd[1]: Starting initrd-switch-root.service - Switch Root... Dec 13 01:47:24.116603 systemd[1]: Switching root. Dec 13 01:47:24.150612 systemd-journald[237]: Received SIGTERM from PID 1 (systemd). Dec 13 01:47:24.150653 systemd-journald[237]: Journal stopped Dec 13 01:47:24.817882 kernel: SELinux: policy capability network_peer_controls=1 Dec 13 01:47:24.817936 kernel: SELinux: policy capability open_perms=1 Dec 13 01:47:24.817948 kernel: SELinux: policy capability extended_socket_class=1 Dec 13 01:47:24.817957 kernel: SELinux: policy capability always_check_network=0 Dec 13 01:47:24.817969 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 13 01:47:24.817982 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 13 01:47:24.817995 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 13 01:47:24.818004 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 13 01:47:24.818017 kernel: audit: type=1403 audit(1734054444.282:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Dec 13 01:47:24.818028 systemd[1]: Successfully loaded SELinux policy in 30.236ms. Dec 13 01:47:24.818047 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.295ms. Dec 13 01:47:24.818059 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Dec 13 01:47:24.818070 systemd[1]: Detected virtualization kvm. Dec 13 01:47:24.818080 systemd[1]: Detected architecture arm64. Dec 13 01:47:24.818090 systemd[1]: Detected first boot. Dec 13 01:47:24.818101 systemd[1]: Initializing machine ID from VM UUID. Dec 13 01:47:24.818111 zram_generator::config[1043]: No configuration found. Dec 13 01:47:24.818122 systemd[1]: Populated /etc with preset unit settings. 
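"Switching root" is the hand-off from the initramfs to the real root filesystem: PID 1 tears down the initrd journal (the SIGTERM / "Journal stopped" pair above), moves /sysroot onto / and re-executes itself, after which the SELinux policy is loaded and the machine is recognised as being on its first boot. From an initrd shell, the same transition would be triggered with roughly:

    systemctl switch-root /sysroot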
Dec 13 01:47:24.818134 systemd[1]: initrd-switch-root.service: Deactivated successfully. Dec 13 01:47:24.818144 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Dec 13 01:47:24.818154 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Dec 13 01:47:24.818165 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Dec 13 01:47:24.818176 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Dec 13 01:47:24.818187 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Dec 13 01:47:24.818197 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Dec 13 01:47:24.818212 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Dec 13 01:47:24.818222 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Dec 13 01:47:24.818234 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Dec 13 01:47:24.818244 systemd[1]: Created slice user.slice - User and Session Slice. Dec 13 01:47:24.818255 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 01:47:24.818265 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 01:47:24.818276 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Dec 13 01:47:24.818289 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Dec 13 01:47:24.818300 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Dec 13 01:47:24.818311 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 13 01:47:24.818321 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Dec 13 01:47:24.818333 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 01:47:24.818343 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Dec 13 01:47:24.818353 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Dec 13 01:47:24.818364 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Dec 13 01:47:24.818375 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Dec 13 01:47:24.818385 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 01:47:24.818396 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 13 01:47:24.818406 systemd[1]: Reached target slices.target - Slice Units. Dec 13 01:47:24.818418 systemd[1]: Reached target swap.target - Swaps. Dec 13 01:47:24.818428 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Dec 13 01:47:24.818439 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Dec 13 01:47:24.818450 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 13 01:47:24.818461 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 13 01:47:24.818471 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 01:47:24.818483 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Dec 13 01:47:24.818493 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... 
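The preset unit settings populated into /etc at the end of the previous burst of messages are what the files-stage ops op(f) and op(11) recorded: systemd preset files are plain enable/disable lists that systemd consults on first boot. A sketch of the drop-in Ignition typically leaves behind (the exact path is an assumption):

    # /etc/systemd/system-preset/20-ignition.preset
    enable prepare-helm.service
    disable coreos-metadata.service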
Dec 13 01:47:24.818503 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Dec 13 01:47:24.818516 systemd[1]: Mounting media.mount - External Media Directory... Dec 13 01:47:24.818527 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Dec 13 01:47:24.818538 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Dec 13 01:47:24.818548 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Dec 13 01:47:24.818567 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 13 01:47:24.818579 systemd[1]: Reached target machines.target - Containers. Dec 13 01:47:24.818599 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Dec 13 01:47:24.818609 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 01:47:24.818622 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 13 01:47:24.818632 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Dec 13 01:47:24.818643 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 01:47:24.818654 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 13 01:47:24.818664 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 01:47:24.818674 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Dec 13 01:47:24.818685 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 01:47:24.818697 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 13 01:47:24.818707 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Dec 13 01:47:24.818719 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Dec 13 01:47:24.818729 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Dec 13 01:47:24.818739 systemd[1]: Stopped systemd-fsck-usr.service. Dec 13 01:47:24.818749 kernel: fuse: init (API version 7.39) Dec 13 01:47:24.818759 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 13 01:47:24.818769 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 13 01:47:24.818780 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 13 01:47:24.818789 kernel: loop: module loaded Dec 13 01:47:24.818799 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Dec 13 01:47:24.818811 kernel: ACPI: bus type drm_connector registered Dec 13 01:47:24.818840 systemd-journald[1107]: Collecting audit messages is disabled. Dec 13 01:47:24.818863 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 13 01:47:24.818874 systemd-journald[1107]: Journal started Dec 13 01:47:24.818893 systemd-journald[1107]: Runtime Journal (/run/log/journal/85025ebe7af74c8380234720c2e88491) is 5.9M, max 47.3M, 41.4M free. Dec 13 01:47:24.818929 systemd[1]: verity-setup.service: Deactivated successfully. Dec 13 01:47:24.632207 systemd[1]: Queued start job for default target multi-user.target. Dec 13 01:47:24.646060 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. 
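The "Runtime Journal ... is 5.9M, max 47.3M" line refers to the volatile journal kept on the /run tmpfs until systemd-journal-flush.service (started below) moves it to persistent storage; by default the cap is derived from a fraction of the filesystem size. It could be pinned explicitly, for instance:

    # /etc/systemd/journald.conf (illustrative)
    [Journal]
    RuntimeMaxUse=48M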
Dec 13 01:47:24.646380 systemd[1]: systemd-journald.service: Deactivated successfully. Dec 13 01:47:24.820987 systemd[1]: Stopped verity-setup.service. Dec 13 01:47:24.823603 systemd[1]: Started systemd-journald.service - Journal Service. Dec 13 01:47:24.825052 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Dec 13 01:47:24.826393 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Dec 13 01:47:24.827602 systemd[1]: Mounted media.mount - External Media Directory. Dec 13 01:47:24.828669 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Dec 13 01:47:24.830138 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Dec 13 01:47:24.831441 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Dec 13 01:47:24.832693 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 01:47:24.834136 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 13 01:47:24.834271 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Dec 13 01:47:24.835778 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 01:47:24.835903 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 01:47:24.837727 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 01:47:24.837862 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 13 01:47:24.839162 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 01:47:24.839300 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 01:47:24.840748 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Dec 13 01:47:24.842138 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 13 01:47:24.842266 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Dec 13 01:47:24.843656 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 01:47:24.843776 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 01:47:24.845083 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 13 01:47:24.846387 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 13 01:47:24.847839 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Dec 13 01:47:24.859780 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 13 01:47:24.870680 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Dec 13 01:47:24.872659 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Dec 13 01:47:24.873808 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 13 01:47:24.873849 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 13 01:47:24.875738 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Dec 13 01:47:24.877871 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Dec 13 01:47:24.879939 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Dec 13 01:47:24.881069 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 01:47:24.882757 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... 
Dec 13 01:47:24.884732 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Dec 13 01:47:24.885982 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 01:47:24.886965 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Dec 13 01:47:24.888151 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 13 01:47:24.892722 systemd-journald[1107]: Time spent on flushing to /var/log/journal/85025ebe7af74c8380234720c2e88491 is 24.345ms for 853 entries. Dec 13 01:47:24.892722 systemd-journald[1107]: System Journal (/var/log/journal/85025ebe7af74c8380234720c2e88491) is 8.0M, max 195.6M, 187.6M free. Dec 13 01:47:24.928870 systemd-journald[1107]: Received client request to flush runtime journal. Dec 13 01:47:24.928911 kernel: loop0: detected capacity change from 0 to 114432 Dec 13 01:47:24.928925 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 13 01:47:24.892788 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 13 01:47:24.895961 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Dec 13 01:47:24.900511 systemd[1]: Starting systemd-sysusers.service - Create System Users... Dec 13 01:47:24.902920 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 01:47:24.904351 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Dec 13 01:47:24.905637 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Dec 13 01:47:24.908723 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Dec 13 01:47:24.910352 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Dec 13 01:47:24.915255 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Dec 13 01:47:24.926686 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Dec 13 01:47:24.931749 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Dec 13 01:47:24.934237 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Dec 13 01:47:24.938627 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 13 01:47:24.949182 systemd[1]: Finished systemd-sysusers.service - Create System Users. Dec 13 01:47:24.956871 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 13 01:47:24.959445 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 13 01:47:24.959601 kernel: loop1: detected capacity change from 0 to 114328 Dec 13 01:47:24.960243 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Dec 13 01:47:24.964681 udevadm[1166]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Dec 13 01:47:24.991159 systemd-tmpfiles[1172]: ACLs are not supported, ignoring. Dec 13 01:47:24.991176 systemd-tmpfiles[1172]: ACLs are not supported, ignoring. Dec 13 01:47:24.994645 kernel: loop2: detected capacity change from 0 to 189592 Dec 13 01:47:24.996143 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Dec 13 01:47:25.040616 kernel: loop3: detected capacity change from 0 to 114432 Dec 13 01:47:25.045598 kernel: loop4: detected capacity change from 0 to 114328 Dec 13 01:47:25.049613 kernel: loop5: detected capacity change from 0 to 189592 Dec 13 01:47:25.053972 (sd-merge)[1180]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Dec 13 01:47:25.054374 (sd-merge)[1180]: Merged extensions into '/usr'. Dec 13 01:47:25.057874 systemd[1]: Reloading requested from client PID 1154 ('systemd-sysext') (unit systemd-sysext.service)... Dec 13 01:47:25.057895 systemd[1]: Reloading... Dec 13 01:47:25.098608 zram_generator::config[1203]: No configuration found. Dec 13 01:47:25.161546 ldconfig[1149]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 13 01:47:25.200334 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:47:25.235483 systemd[1]: Reloading finished in 177 ms. Dec 13 01:47:25.269333 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Dec 13 01:47:25.270769 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Dec 13 01:47:25.285751 systemd[1]: Starting ensure-sysext.service... Dec 13 01:47:25.287604 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 13 01:47:25.294428 systemd[1]: Reloading requested from client PID 1241 ('systemctl') (unit ensure-sysext.service)... Dec 13 01:47:25.294445 systemd[1]: Reloading... Dec 13 01:47:25.304817 systemd-tmpfiles[1242]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 13 01:47:25.305365 systemd-tmpfiles[1242]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Dec 13 01:47:25.306131 systemd-tmpfiles[1242]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Dec 13 01:47:25.306440 systemd-tmpfiles[1242]: ACLs are not supported, ignoring. Dec 13 01:47:25.306576 systemd-tmpfiles[1242]: ACLs are not supported, ignoring. Dec 13 01:47:25.308988 systemd-tmpfiles[1242]: Detected autofs mount point /boot during canonicalization of boot. Dec 13 01:47:25.309092 systemd-tmpfiles[1242]: Skipping /boot Dec 13 01:47:25.316210 systemd-tmpfiles[1242]: Detected autofs mount point /boot during canonicalization of boot. Dec 13 01:47:25.316315 systemd-tmpfiles[1242]: Skipping /boot Dec 13 01:47:25.340610 zram_generator::config[1269]: No configuration found. Dec 13 01:47:25.420973 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:47:25.456080 systemd[1]: Reloading finished in 161 ms. Dec 13 01:47:25.470591 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Dec 13 01:47:25.482075 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 01:47:25.489526 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Dec 13 01:47:25.492027 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Dec 13 01:47:25.494483 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... 
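The loop3/loop4/loop5 "capacity change" lines and the (sd-merge) messages are systemd-sysext at work: the extension images placed by Ignition are attached as loop devices and overlay-mounted onto /usr, which is why systemd then reloads its unit files. Each image must carry an extension-release file matching the host OS; for the kubernetes image that file would look roughly like this (keys as commonly used by Flatcar sysexts, so treat as a sketch):

    # usr/lib/extension-release.d/extension-release.kubernetes, inside the .raw image
    ID=flatcar
    SYSEXT_LEVEL=1.0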
Dec 13 01:47:25.497229 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 13 01:47:25.499830 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 01:47:25.503844 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Dec 13 01:47:25.507169 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 01:47:25.508233 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 01:47:25.512363 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 01:47:25.515885 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 01:47:25.518531 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 01:47:25.523668 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Dec 13 01:47:25.525362 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 01:47:25.525493 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 01:47:25.527027 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 01:47:25.527148 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 01:47:25.528852 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 01:47:25.528974 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 01:47:25.533538 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Dec 13 01:47:25.544261 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 01:47:25.546401 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 01:47:25.550657 systemd-udevd[1311]: Using default interface naming scheme 'v255'. Dec 13 01:47:25.555870 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 01:47:25.558077 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 01:47:25.559254 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 01:47:25.565865 systemd[1]: Starting systemd-update-done.service - Update is Completed... Dec 13 01:47:25.567538 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 01:47:25.569295 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Dec 13 01:47:25.570964 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Dec 13 01:47:25.572707 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 01:47:25.572834 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 01:47:25.574433 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 01:47:25.574701 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 01:47:25.576255 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 01:47:25.576372 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 01:47:25.590135 systemd[1]: Finished ensure-sysext.service. 
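The repeated modprobe@dm_mod/drm/efi_pstore/loop starts all instantiate systemd's modprobe@.service template, which maps the unit's instance name to the module to load; abridged from the upstream template (so treat as a sketch), it is essentially:

    [Unit]
    Description=Load Kernel Module %i
    DefaultDependencies=no

    [Service]
    Type=oneshot
    ExecStart=-/sbin/modprobe -abq %i

The leading "-" tolerates modprobe failures, which is why these units always deactivate successfully even for modules that are built in or absent.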
Dec 13 01:47:25.592567 augenrules[1353]: No rules Dec 13 01:47:25.596541 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Dec 13 01:47:25.599812 systemd[1]: Started systemd-userdbd.service - User Database Manager. Dec 13 01:47:25.603065 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 01:47:25.613869 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 01:47:25.616258 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 13 01:47:25.618402 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 01:47:25.620633 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 01:47:25.621715 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 01:47:25.629114 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 13 01:47:25.633717 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Dec 13 01:47:25.635532 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 01:47:25.636094 systemd[1]: Finished systemd-update-done.service - Update is Completed. Dec 13 01:47:25.638403 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1351) Dec 13 01:47:25.638455 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 46 scanned by (udev-worker) (1348) Dec 13 01:47:25.640082 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 01:47:25.640238 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 01:47:25.641870 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 01:47:25.642002 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 13 01:47:25.644606 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1351) Dec 13 01:47:25.644636 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 01:47:25.644758 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 01:47:25.647026 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 01:47:25.647174 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 01:47:25.653395 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Dec 13 01:47:25.661165 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 01:47:25.661226 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 13 01:47:25.669876 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Dec 13 01:47:25.672597 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Dec 13 01:47:25.733047 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Dec 13 01:47:25.735020 systemd[1]: Reached target time-set.target - System Time Set. 
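augenrules "No rules" above simply means nothing was found under /etc/audit/rules.d/ to compile into the kernel audit ruleset; audit-rules.service still finishes successfully. An illustrative rule file it would otherwise pick up (path and key name are hypothetical):

    # /etc/audit/rules.d/identity.rules
    -w /etc/passwd -p wa -k identity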
Dec 13 01:47:25.736674 systemd-resolved[1310]: Positive Trust Anchors: Dec 13 01:47:25.736687 systemd-resolved[1310]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 01:47:25.736720 systemd-resolved[1310]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 13 01:47:25.741694 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Dec 13 01:47:25.751568 systemd-networkd[1379]: lo: Link UP Dec 13 01:47:25.751616 systemd-networkd[1379]: lo: Gained carrier Dec 13 01:47:25.752319 systemd-networkd[1379]: Enumeration completed Dec 13 01:47:25.752460 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 13 01:47:25.753114 systemd-networkd[1379]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 01:47:25.753123 systemd-networkd[1379]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 01:47:25.753971 systemd-networkd[1379]: eth0: Link UP Dec 13 01:47:25.753981 systemd-networkd[1379]: eth0: Gained carrier Dec 13 01:47:25.753995 systemd-networkd[1379]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 01:47:25.762114 systemd-resolved[1310]: Defaulting to hostname 'linux'. Dec 13 01:47:25.773628 systemd-networkd[1379]: eth0: DHCPv4 address 10.0.0.141/16, gateway 10.0.0.1 acquired from 10.0.0.1 Dec 13 01:47:25.773847 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Dec 13 01:47:25.774226 systemd-timesyncd[1380]: Network configuration changed, trying to establish connection. Dec 13 01:47:25.776147 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 01:47:25.777483 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 13 01:47:25.777715 systemd-timesyncd[1380]: Contacted time server 10.0.0.1:123 (10.0.0.1). Dec 13 01:47:25.777755 systemd-timesyncd[1380]: Initial clock synchronization to Fri 2024-12-13 01:47:26.057593 UTC. Dec 13 01:47:25.779071 systemd[1]: Reached target network.target - Network. Dec 13 01:47:25.780047 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 13 01:47:25.790010 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Dec 13 01:47:25.801743 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Dec 13 01:47:25.827639 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:47:25.836058 lvm[1403]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 01:47:25.880155 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Dec 13 01:47:25.881671 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. 
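eth0 is configured by Flatcar's catch-all zz-default.network, matched "based on potentially unpredictable interface name" because it applies to any interface; functionally it boils down to DHCP everywhere, roughly:

    # /usr/lib/systemd/network/zz-default.network (abridged)
    [Match]
    Name=*

    [Network]
    DHCP=yes

The DHCPv4 lease (10.0.0.141/16 via 10.0.0.1) evidently also supplies the NTP server, which is why systemd-timesyncd contacts 10.0.0.1:123 immediately afterwards.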
Dec 13 01:47:25.884652 systemd[1]: Reached target sysinit.target - System Initialization. Dec 13 01:47:25.885756 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Dec 13 01:47:25.886949 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Dec 13 01:47:25.888306 systemd[1]: Started logrotate.timer - Daily rotation of log files. Dec 13 01:47:25.889480 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Dec 13 01:47:25.890896 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Dec 13 01:47:25.892152 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 13 01:47:25.892191 systemd[1]: Reached target paths.target - Path Units. Dec 13 01:47:25.893124 systemd[1]: Reached target timers.target - Timer Units. Dec 13 01:47:25.895030 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Dec 13 01:47:25.897401 systemd[1]: Starting docker.socket - Docker Socket for the API... Dec 13 01:47:25.902542 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Dec 13 01:47:25.904679 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Dec 13 01:47:25.906222 systemd[1]: Listening on docker.socket - Docker Socket for the API. Dec 13 01:47:25.907429 systemd[1]: Reached target sockets.target - Socket Units. Dec 13 01:47:25.908423 systemd[1]: Reached target basic.target - Basic System. Dec 13 01:47:25.909456 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Dec 13 01:47:25.909487 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Dec 13 01:47:25.910343 systemd[1]: Starting containerd.service - containerd container runtime... Dec 13 01:47:25.912322 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Dec 13 01:47:25.912681 lvm[1410]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 01:47:25.915345 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Dec 13 01:47:25.918796 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Dec 13 01:47:25.921513 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Dec 13 01:47:25.924571 jq[1413]: false Dec 13 01:47:25.924620 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Dec 13 01:47:25.927606 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Dec 13 01:47:25.930363 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... 
Dec 13 01:47:25.933520 extend-filesystems[1414]: Found loop3 Dec 13 01:47:25.934783 extend-filesystems[1414]: Found loop4 Dec 13 01:47:25.934783 extend-filesystems[1414]: Found loop5 Dec 13 01:47:25.934783 extend-filesystems[1414]: Found vda Dec 13 01:47:25.934783 extend-filesystems[1414]: Found vda1 Dec 13 01:47:25.934783 extend-filesystems[1414]: Found vda2 Dec 13 01:47:25.934783 extend-filesystems[1414]: Found vda3 Dec 13 01:47:25.934783 extend-filesystems[1414]: Found usr Dec 13 01:47:25.934783 extend-filesystems[1414]: Found vda4 Dec 13 01:47:25.934783 extend-filesystems[1414]: Found vda6 Dec 13 01:47:25.934783 extend-filesystems[1414]: Found vda7 Dec 13 01:47:25.934783 extend-filesystems[1414]: Found vda9 Dec 13 01:47:25.934783 extend-filesystems[1414]: Checking size of /dev/vda9 Dec 13 01:47:25.936701 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Dec 13 01:47:25.941457 systemd[1]: Starting systemd-logind.service - User Login Management... Dec 13 01:47:25.946850 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 13 01:47:25.947286 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 13 01:47:25.948095 systemd[1]: Starting update-engine.service - Update Engine... Dec 13 01:47:25.952001 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Dec 13 01:47:25.957622 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Dec 13 01:47:25.960296 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 13 01:47:25.962695 jq[1431]: true Dec 13 01:47:25.960618 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Dec 13 01:47:25.960916 systemd[1]: motdgen.service: Deactivated successfully. Dec 13 01:47:25.961059 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Dec 13 01:47:25.963046 extend-filesystems[1414]: Resized partition /dev/vda9 Dec 13 01:47:25.965159 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 13 01:47:25.966827 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Dec 13 01:47:25.972955 dbus-daemon[1412]: [system] SELinux support is enabled Dec 13 01:47:25.974232 extend-filesystems[1437]: resize2fs 1.47.1 (20-May-2024) Dec 13 01:47:25.975568 systemd[1]: Started dbus.service - D-Bus System Message Bus. Dec 13 01:47:25.985826 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 13 01:47:25.985866 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Dec 13 01:47:25.988600 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Dec 13 01:47:25.988941 jq[1438]: true Dec 13 01:47:25.989513 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 13 01:47:25.989541 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
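extend-filesystems grows the root filesystem on-line to fill its already-enlarged partition; the kernel line above records ext4 going from 553472 to 1864699 4 KiB blocks, i.e.:

    553472  * 4096 = 2267021312 bytes ≈ 2.1 GiB
    1864699 * 4096 = 7637807104 bytes ≈ 7.1 GiB

The resize itself is performed by the resize2fs 1.47.1 seen above against the mounted filesystem, equivalent to running resize2fs /dev/vda9 by hand; the completion messages follow below.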
Dec 13 01:47:25.991797 systemd-logind[1427]: Watching system buttons on /dev/input/event0 (Power Button) Dec 13 01:47:25.997087 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 46 scanned by (udev-worker) (1359) Dec 13 01:47:25.997912 systemd-logind[1427]: New seat seat0. Dec 13 01:47:25.999131 update_engine[1429]: I20241213 01:47:25.998822 1429 main.cc:92] Flatcar Update Engine starting Dec 13 01:47:26.004158 systemd[1]: Started systemd-logind.service - User Login Management. Dec 13 01:47:26.008270 update_engine[1429]: I20241213 01:47:26.008077 1429 update_check_scheduler.cc:74] Next update check in 9m50s Dec 13 01:47:26.012960 (ntainerd)[1449]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Dec 13 01:47:26.024046 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Dec 13 01:47:26.015577 systemd[1]: Started update-engine.service - Update Engine. Dec 13 01:47:26.024253 tar[1436]: linux-arm64/helm Dec 13 01:47:26.019757 systemd[1]: Started locksmithd.service - Cluster reboot manager. Dec 13 01:47:26.024550 extend-filesystems[1437]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Dec 13 01:47:26.024550 extend-filesystems[1437]: old_desc_blocks = 1, new_desc_blocks = 1 Dec 13 01:47:26.024550 extend-filesystems[1437]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Dec 13 01:47:26.030687 extend-filesystems[1414]: Resized filesystem in /dev/vda9 Dec 13 01:47:26.025526 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 13 01:47:26.027864 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Dec 13 01:47:26.087665 bash[1468]: Updated "/home/core/.ssh/authorized_keys" Dec 13 01:47:26.090972 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Dec 13 01:47:26.093509 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Dec 13 01:47:26.101159 locksmithd[1453]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 13 01:47:26.203623 containerd[1449]: time="2024-12-13T01:47:26.203452891Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Dec 13 01:47:26.230776 containerd[1449]: time="2024-12-13T01:47:26.230682909Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:47:26.232167 containerd[1449]: time="2024-12-13T01:47:26.232110457Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.65-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:47:26.232167 containerd[1449]: time="2024-12-13T01:47:26.232149621Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Dec 13 01:47:26.232167 containerd[1449]: time="2024-12-13T01:47:26.232167878Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Dec 13 01:47:26.232352 containerd[1449]: time="2024-12-13T01:47:26.232331407Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Dec 13 01:47:26.232378 containerd[1449]: time="2024-12-13T01:47:26.232359518Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." 
type=io.containerd.snapshotter.v1 Dec 13 01:47:26.232442 containerd[1449]: time="2024-12-13T01:47:26.232424143Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:47:26.232468 containerd[1449]: time="2024-12-13T01:47:26.232443228Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:47:26.232653 containerd[1449]: time="2024-12-13T01:47:26.232631101Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:47:26.232688 containerd[1449]: time="2024-12-13T01:47:26.232655444Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Dec 13 01:47:26.232688 containerd[1449]: time="2024-12-13T01:47:26.232670762Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:47:26.232688 containerd[1449]: time="2024-12-13T01:47:26.232680822Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Dec 13 01:47:26.232774 containerd[1449]: time="2024-12-13T01:47:26.232759233Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:47:26.232985 containerd[1449]: time="2024-12-13T01:47:26.232957248Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:47:26.233076 containerd[1449]: time="2024-12-13T01:47:26.233059671Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:47:26.233097 containerd[1449]: time="2024-12-13T01:47:26.233078715Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Dec 13 01:47:26.233170 containerd[1449]: time="2024-12-13T01:47:26.233156961Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Dec 13 01:47:26.233213 containerd[1449]: time="2024-12-13T01:47:26.233201590Z" level=info msg="metadata content store policy set" policy=shared Dec 13 01:47:26.236737 containerd[1449]: time="2024-12-13T01:47:26.236706042Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Dec 13 01:47:26.236808 containerd[1449]: time="2024-12-13T01:47:26.236762511Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Dec 13 01:47:26.236808 containerd[1449]: time="2024-12-13T01:47:26.236781141Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Dec 13 01:47:26.236808 containerd[1449]: time="2024-12-13T01:47:26.236796211Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Dec 13 01:47:26.236928 containerd[1449]: time="2024-12-13T01:47:26.236817614Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." 
type=io.containerd.runtime.v1
Dec 13 01:47:26.237118 containerd[1449]: time="2024-12-13T01:47:26.237096028Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Dec 13 01:47:26.241651 containerd[1449]: time="2024-12-13T01:47:26.238516331Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Dec 13 01:47:26.241651 containerd[1449]: time="2024-12-13T01:47:26.238712566Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Dec 13 01:47:26.241651 containerd[1449]: time="2024-12-13T01:47:26.238751524Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Dec 13 01:47:26.241651 containerd[1449]: time="2024-12-13T01:47:26.238770567Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Dec 13 01:47:26.241651 containerd[1449]: time="2024-12-13T01:47:26.238791474Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Dec 13 01:47:26.241651 containerd[1449]: time="2024-12-13T01:47:26.238810187Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Dec 13 01:47:26.241651 containerd[1449]: time="2024-12-13T01:47:26.238833371Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Dec 13 01:47:26.241651 containerd[1449]: time="2024-12-13T01:47:26.238852870Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Dec 13 01:47:26.241651 containerd[1449]: time="2024-12-13T01:47:26.238869844Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Dec 13 01:47:26.241651 containerd[1449]: time="2024-12-13T01:47:26.238888474Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Dec 13 01:47:26.241651 containerd[1449]: time="2024-12-13T01:47:26.238912320Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Dec 13 01:47:26.241651 containerd[1449]: time="2024-12-13T01:47:26.238928342Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Dec 13 01:47:26.241651 containerd[1449]: time="2024-12-13T01:47:26.238954051Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Dec 13 01:47:26.241651 containerd[1449]: time="2024-12-13T01:47:26.238971895Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Dec 13 01:47:26.241950 containerd[1449]: time="2024-12-13T01:47:26.238988496Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Dec 13 01:47:26.241950 containerd[1449]: time="2024-12-13T01:47:26.239006339Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Dec 13 01:47:26.241950 containerd[1449]: time="2024-12-13T01:47:26.239019670Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Dec 13 01:47:26.241950 containerd[1449]: time="2024-12-13T01:47:26.239036892Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Dec 13 01:47:26.241950 containerd[1449]: time="2024-12-13T01:47:26.239052749Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Dec 13 01:47:26.241950 containerd[1449]: time="2024-12-13T01:47:26.239069598Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Dec 13 01:47:26.241950 containerd[1449]: time="2024-12-13T01:47:26.239086282Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Dec 13 01:47:26.241950 containerd[1449]: time="2024-12-13T01:47:26.239113027Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Dec 13 01:47:26.241950 containerd[1449]: time="2024-12-13T01:47:26.239134223Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Dec 13 01:47:26.241950 containerd[1449]: time="2024-12-13T01:47:26.239148299Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Dec 13 01:47:26.241950 containerd[1449]: time="2024-12-13T01:47:26.239164321Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Dec 13 01:47:26.241950 containerd[1449]: time="2024-12-13T01:47:26.239187546Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Dec 13 01:47:26.241950 containerd[1449]: time="2024-12-13T01:47:26.239213670Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Dec 13 01:47:26.241950 containerd[1449]: time="2024-12-13T01:47:26.239231471Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Dec 13 01:47:26.241950 containerd[1449]: time="2024-12-13T01:47:26.239247121Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Dec 13 01:47:26.242194 containerd[1449]: time="2024-12-13T01:47:26.239378027Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Dec 13 01:47:26.242194 containerd[1449]: time="2024-12-13T01:47:26.239402784Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Dec 13 01:47:26.242194 containerd[1449]: time="2024-12-13T01:47:26.239419013Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Dec 13 01:47:26.242194 containerd[1449]: time="2024-12-13T01:47:26.239433834Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Dec 13 01:47:26.242194 containerd[1449]: time="2024-12-13T01:47:26.239447951Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Dec 13 01:47:26.242194 containerd[1449]: time="2024-12-13T01:47:26.239579934Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Dec 13 01:47:26.242194 containerd[1449]: time="2024-12-13T01:47:26.239604691Z" level=info msg="NRI interface is disabled by configuration."
Dec 13 01:47:26.242194 containerd[1449]: time="2024-12-13T01:47:26.239638887Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Dec 13 01:47:26.242328 containerd[1449]: time="2024-12-13T01:47:26.240022208Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Dec 13 01:47:26.242328 containerd[1449]: time="2024-12-13T01:47:26.240091801Z" level=info msg="Connect containerd service"
Dec 13 01:47:26.242328 containerd[1449]: time="2024-12-13T01:47:26.240130178Z" level=info msg="using legacy CRI server"
Dec 13 01:47:26.242328 containerd[1449]: time="2024-12-13T01:47:26.240137341Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Dec 13 01:47:26.242328 containerd[1449]: time="2024-12-13T01:47:26.240232188Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Dec 13 01:47:26.242328 containerd[1449]: time="2024-12-13T01:47:26.240992247Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Dec 13 01:47:26.242328 containerd[1449]: time="2024-12-13T01:47:26.241112182Z" level=info msg="Start subscribing containerd event"
Dec 13 01:47:26.242328 containerd[1449]: time="2024-12-13T01:47:26.241154451Z" level=info msg="Start recovering state"
Dec 13 01:47:26.242328 containerd[1449]: time="2024-12-13T01:47:26.241394114Z" level=info msg="Start event monitor"
Dec 13 01:47:26.242328 containerd[1449]: time="2024-12-13T01:47:26.241407528Z" level=info msg="Start snapshots syncer"
Dec 13 01:47:26.242328 containerd[1449]: time="2024-12-13T01:47:26.241422598Z" level=info msg="Start cni network conf syncer for default"
Dec 13 01:47:26.242328 containerd[1449]: time="2024-12-13T01:47:26.241431002Z" level=info msg="Start streaming server"
Dec 13 01:47:26.243299 containerd[1449]: time="2024-12-13T01:47:26.243272382Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Dec 13 01:47:26.243428 containerd[1449]: time="2024-12-13T01:47:26.243412852Z" level=info msg=serving... address=/run/containerd/containerd.sock
Dec 13 01:47:26.243545 containerd[1449]: time="2024-12-13T01:47:26.243530800Z" level=info msg="containerd successfully booted in 0.041427s"
Dec 13 01:47:26.243735 systemd[1]: Started containerd.service - containerd container runtime.
Dec 13 01:47:26.369206 tar[1436]: linux-arm64/LICENSE
Dec 13 01:47:26.369331 tar[1436]: linux-arm64/README.md
Dec 13 01:47:26.382004 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Dec 13 01:47:27.050381 sshd_keygen[1434]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Dec 13 01:47:27.070706 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Dec 13 01:47:27.081849 systemd[1]: Starting issuegen.service - Generate /run/issue...
Dec 13 01:47:27.087042 systemd[1]: issuegen.service: Deactivated successfully.
Dec 13 01:47:27.087236 systemd[1]: Finished issuegen.service - Generate /run/issue.
Dec 13 01:47:27.091859 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Dec 13 01:47:27.101116 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Dec 13 01:47:27.103934 systemd[1]: Started getty@tty1.service - Getty on tty1.
Dec 13 01:47:27.106113 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
Dec 13 01:47:27.107438 systemd[1]: Reached target getty.target - Login Prompts.
Dec 13 01:47:27.203380 systemd-networkd[1379]: eth0: Gained IPv6LL
Dec 13 01:47:27.206741 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Dec 13 01:47:27.208765 systemd[1]: Reached target network-online.target - Network is Online.
Dec 13 01:47:27.225891 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Dec 13 01:47:27.228382 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 01:47:27.230569 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Dec 13 01:47:27.246087 systemd[1]: coreos-metadata.service: Deactivated successfully.
Dec 13 01:47:27.246306 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Dec 13 01:47:27.248148 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Dec 13 01:47:27.250669 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Dec 13 01:47:27.738227 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 01:47:27.739802 systemd[1]: Reached target multi-user.target - Multi-User System.
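Every record in this log shares one shape: a timestamp prefix such as "Dec 13 01:47:26.243735", a process name with a PID in brackets ("containerd[1449]:"), and a free-form message. A minimal Go sketch for splitting records of this shape back out of a merged stream; the regular expression and field names are illustrative assumptions derived from the entries above, not any systemd or containerd API:

    package main

    import (
    	"fmt"
    	"regexp"
    )

    // entryRe matches the prefix visible in this log, e.g.
    // "Dec 13 01:47:26.243735 systemd[1]: Started containerd.service ...".
    var entryRe = regexp.MustCompile(`^(\w{3} \d{2} \d{2}:\d{2}:\d{2}\.\d{6}) (\S+?)\[(\d+)\]: (.*)$`)

    // parse pulls the timestamp, unit/comm name, PID, and message out of one record.
    func parse(line string) (ts, unit, pid, msg string, ok bool) {
    	m := entryRe.FindStringSubmatch(line)
    	if m == nil {
    		return "", "", "", "", false
    	}
    	return m[1], m[2], m[3], m[4], true
    }

    func main() {
    	ts, unit, pid, msg, ok := parse(`Dec 13 01:47:26.243735 systemd[1]: Started containerd.service - containerd container runtime.`)
    	if ok {
    		fmt.Println(ts, unit, pid, msg)
    	}
    }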
Dec 13 01:47:27.744279 (kubelet)[1524]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 13 01:47:27.745852 systemd[1]: Startup finished in 596ms (kernel) + 4.547s (initrd) + 3.496s (userspace) = 8.641s.
Dec 13 01:47:28.163656 kubelet[1524]: E1213 01:47:28.163532 1524 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 01:47:28.165969 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 01:47:28.166123 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 01:47:32.838442 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Dec 13 01:47:32.839624 systemd[1]: Started sshd@0-10.0.0.141:22-10.0.0.1:56216.service - OpenSSH per-connection server daemon (10.0.0.1:56216).
Dec 13 01:47:32.887773 sshd[1537]: Accepted publickey for core from 10.0.0.1 port 56216 ssh2: RSA SHA256:yVKhZEHbC7ylZ7bY3Y8pwdh1t/xp6Vz/y3yLFfd9j+Q
Dec 13 01:47:32.891221 sshd[1537]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:47:32.898347 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Dec 13 01:47:32.914849 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Dec 13 01:47:32.916665 systemd-logind[1427]: New session 1 of user core.
Dec 13 01:47:32.923345 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Dec 13 01:47:32.925435 systemd[1]: Starting user@500.service - User Manager for UID 500...
Dec 13 01:47:32.931547 (systemd)[1541]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Dec 13 01:47:33.001594 systemd[1541]: Queued start job for default target default.target.
Dec 13 01:47:33.010471 systemd[1541]: Created slice app.slice - User Application Slice.
Dec 13 01:47:33.010524 systemd[1541]: Reached target paths.target - Paths.
Dec 13 01:47:33.010537 systemd[1541]: Reached target timers.target - Timers.
Dec 13 01:47:33.011719 systemd[1541]: Starting dbus.socket - D-Bus User Message Bus Socket...
Dec 13 01:47:33.020571 systemd[1541]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Dec 13 01:47:33.020658 systemd[1541]: Reached target sockets.target - Sockets.
Dec 13 01:47:33.020670 systemd[1541]: Reached target basic.target - Basic System.
Dec 13 01:47:33.020705 systemd[1541]: Reached target default.target - Main User Target.
Dec 13 01:47:33.020729 systemd[1541]: Startup finished in 84ms.
Dec 13 01:47:33.021260 systemd[1]: Started user@500.service - User Manager for UID 500.
Dec 13 01:47:33.022640 systemd[1]: Started session-1.scope - Session 1 of User core.
Dec 13 01:47:33.083784 systemd[1]: Started sshd@1-10.0.0.141:22-10.0.0.1:56218.service - OpenSSH per-connection server daemon (10.0.0.1:56218).
Dec 13 01:47:33.116966 sshd[1552]: Accepted publickey for core from 10.0.0.1 port 56218 ssh2: RSA SHA256:yVKhZEHbC7ylZ7bY3Y8pwdh1t/xp6Vz/y3yLFfd9j+Q
Dec 13 01:47:33.118075 sshd[1552]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:47:33.124184 systemd-logind[1427]: New session 2 of user core.
Dec 13 01:47:33.135753 systemd[1]: Started session-2.scope - Session 2 of User core.
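The kubelet exit above is the expected first-boot behavior on a node that has not yet joined a cluster: /var/lib/kubelet/config.yaml is normally written by kubeadm during init/join, so until that happens each start attempt fails and systemd keeps retrying (the restart counter climbs later in the log). A hedged sketch of the same pre-flight check in Go; the path comes from the log line itself, and the error wording is illustrative rather than the kubelet's actual code:

    package main

    import (
    	"fmt"
    	"os"
    )

    const kubeletConfigPath = "/var/lib/kubelet/config.yaml" // path reported in the log

    func main() {
    	if _, err := os.Stat(kubeletConfigPath); err != nil {
    		// Mirrors the failure mode logged by run.go:72: the file simply is not there yet.
    		fmt.Fprintf(os.Stderr, "failed to load kubelet config file, path: %s, error: %v\n", kubeletConfigPath, err)
    		os.Exit(1)
    	}
    	fmt.Println("kubelet config present; a real kubelet would proceed to parse it")
    }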
Dec 13 01:47:33.187242 sshd[1552]: pam_unix(sshd:session): session closed for user core
Dec 13 01:47:33.194939 systemd[1]: sshd@1-10.0.0.141:22-10.0.0.1:56218.service: Deactivated successfully.
Dec 13 01:47:33.196391 systemd[1]: session-2.scope: Deactivated successfully.
Dec 13 01:47:33.197727 systemd-logind[1427]: Session 2 logged out. Waiting for processes to exit.
Dec 13 01:47:33.199087 systemd[1]: Started sshd@2-10.0.0.141:22-10.0.0.1:56228.service - OpenSSH per-connection server daemon (10.0.0.1:56228).
Dec 13 01:47:33.200041 systemd-logind[1427]: Removed session 2.
Dec 13 01:47:33.235242 sshd[1559]: Accepted publickey for core from 10.0.0.1 port 56228 ssh2: RSA SHA256:yVKhZEHbC7ylZ7bY3Y8pwdh1t/xp6Vz/y3yLFfd9j+Q
Dec 13 01:47:33.236685 sshd[1559]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:47:33.242402 systemd-logind[1427]: New session 3 of user core.
Dec 13 01:47:33.247786 systemd[1]: Started session-3.scope - Session 3 of User core.
Dec 13 01:47:33.296260 sshd[1559]: pam_unix(sshd:session): session closed for user core
Dec 13 01:47:33.312012 systemd[1]: sshd@2-10.0.0.141:22-10.0.0.1:56228.service: Deactivated successfully.
Dec 13 01:47:33.313327 systemd[1]: session-3.scope: Deactivated successfully.
Dec 13 01:47:33.315395 systemd-logind[1427]: Session 3 logged out. Waiting for processes to exit.
Dec 13 01:47:33.315820 systemd[1]: Started sshd@3-10.0.0.141:22-10.0.0.1:56236.service - OpenSSH per-connection server daemon (10.0.0.1:56236).
Dec 13 01:47:33.316948 systemd-logind[1427]: Removed session 3.
Dec 13 01:47:33.350158 sshd[1566]: Accepted publickey for core from 10.0.0.1 port 56236 ssh2: RSA SHA256:yVKhZEHbC7ylZ7bY3Y8pwdh1t/xp6Vz/y3yLFfd9j+Q
Dec 13 01:47:33.351340 sshd[1566]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:47:33.355993 systemd-logind[1427]: New session 4 of user core.
Dec 13 01:47:33.367801 systemd[1]: Started session-4.scope - Session 4 of User core.
Dec 13 01:47:33.423884 sshd[1566]: pam_unix(sshd:session): session closed for user core
Dec 13 01:47:33.434996 systemd[1]: sshd@3-10.0.0.141:22-10.0.0.1:56236.service: Deactivated successfully.
Dec 13 01:47:33.436387 systemd[1]: session-4.scope: Deactivated successfully.
Dec 13 01:47:33.438900 systemd[1]: Started sshd@4-10.0.0.141:22-10.0.0.1:56242.service - OpenSSH per-connection server daemon (10.0.0.1:56242).
Dec 13 01:47:33.439694 systemd-logind[1427]: Session 4 logged out. Waiting for processes to exit.
Dec 13 01:47:33.440753 systemd-logind[1427]: Removed session 4.
Dec 13 01:47:33.479305 sshd[1573]: Accepted publickey for core from 10.0.0.1 port 56242 ssh2: RSA SHA256:yVKhZEHbC7ylZ7bY3Y8pwdh1t/xp6Vz/y3yLFfd9j+Q
Dec 13 01:47:33.480548 sshd[1573]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:47:33.490620 systemd-logind[1427]: New session 5 of user core.
Dec 13 01:47:33.498906 systemd[1]: Started session-5.scope - Session 5 of User core.
Dec 13 01:47:33.566082 sudo[1576]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Dec 13 01:47:33.566374 sudo[1576]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 13 01:47:33.581434 sudo[1576]: pam_unix(sudo:session): session closed for user root
Dec 13 01:47:33.583842 sshd[1573]: pam_unix(sshd:session): session closed for user core
Dec 13 01:47:33.594999 systemd[1]: sshd@4-10.0.0.141:22-10.0.0.1:56242.service: Deactivated successfully.
Dec 13 01:47:33.596574 systemd[1]: session-5.scope: Deactivated successfully.
Dec 13 01:47:33.598191 systemd-logind[1427]: Session 5 logged out. Waiting for processes to exit.
Dec 13 01:47:33.599417 systemd[1]: Started sshd@5-10.0.0.141:22-10.0.0.1:56250.service - OpenSSH per-connection server daemon (10.0.0.1:56250).
Dec 13 01:47:33.601748 systemd-logind[1427]: Removed session 5.
Dec 13 01:47:33.637850 sshd[1581]: Accepted publickey for core from 10.0.0.1 port 56250 ssh2: RSA SHA256:yVKhZEHbC7ylZ7bY3Y8pwdh1t/xp6Vz/y3yLFfd9j+Q
Dec 13 01:47:33.638979 sshd[1581]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:47:33.642666 systemd-logind[1427]: New session 6 of user core.
Dec 13 01:47:33.653763 systemd[1]: Started session-6.scope - Session 6 of User core.
Dec 13 01:47:33.704556 sudo[1585]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Dec 13 01:47:33.704873 sudo[1585]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 13 01:47:33.708015 sudo[1585]: pam_unix(sudo:session): session closed for user root
Dec 13 01:47:33.712358 sudo[1584]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Dec 13 01:47:33.712870 sudo[1584]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 13 01:47:33.730862 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Dec 13 01:47:33.732982 auditctl[1588]: No rules
Dec 13 01:47:33.733907 systemd[1]: audit-rules.service: Deactivated successfully.
Dec 13 01:47:33.734691 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Dec 13 01:47:33.736360 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Dec 13 01:47:33.763080 augenrules[1606]: No rules
Dec 13 01:47:33.763580 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Dec 13 01:47:33.766987 sudo[1584]: pam_unix(sudo:session): session closed for user root
Dec 13 01:47:33.768727 sshd[1581]: pam_unix(sshd:session): session closed for user core
Dec 13 01:47:33.786941 systemd[1]: sshd@5-10.0.0.141:22-10.0.0.1:56250.service: Deactivated successfully.
Dec 13 01:47:33.788429 systemd[1]: session-6.scope: Deactivated successfully.
Dec 13 01:47:33.791948 systemd-logind[1427]: Session 6 logged out. Waiting for processes to exit.
Dec 13 01:47:33.795872 systemd[1]: Started sshd@6-10.0.0.141:22-10.0.0.1:56258.service - OpenSSH per-connection server daemon (10.0.0.1:56258).
Dec 13 01:47:33.796893 systemd-logind[1427]: Removed session 6.
Dec 13 01:47:33.827525 sshd[1614]: Accepted publickey for core from 10.0.0.1 port 56258 ssh2: RSA SHA256:yVKhZEHbC7ylZ7bY3Y8pwdh1t/xp6Vz/y3yLFfd9j+Q
Dec 13 01:47:33.828791 sshd[1614]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:47:33.833154 systemd-logind[1427]: New session 7 of user core.
Dec 13 01:47:33.843825 systemd[1]: Started session-7.scope - Session 7 of User core.
Dec 13 01:47:33.902071 sudo[1618]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Dec 13 01:47:33.903103 sudo[1618]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 13 01:47:34.231900 systemd[1]: Starting docker.service - Docker Application Container Engine...
Dec 13 01:47:34.231931 (dockerd)[1637]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Dec 13 01:47:34.488052 dockerd[1637]: time="2024-12-13T01:47:34.487924262Z" level=info msg="Starting up"
Dec 13 01:47:34.625195 dockerd[1637]: time="2024-12-13T01:47:34.625144601Z" level=info msg="Loading containers: start."
Dec 13 01:47:34.710656 kernel: Initializing XFRM netlink socket
Dec 13 01:47:34.777189 systemd-networkd[1379]: docker0: Link UP
Dec 13 01:47:34.794923 dockerd[1637]: time="2024-12-13T01:47:34.794856274Z" level=info msg="Loading containers: done."
Dec 13 01:47:34.805856 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3381129475-merged.mount: Deactivated successfully.
Dec 13 01:47:34.809741 dockerd[1637]: time="2024-12-13T01:47:34.809693324Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Dec 13 01:47:34.809868 dockerd[1637]: time="2024-12-13T01:47:34.809796065Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Dec 13 01:47:34.809932 dockerd[1637]: time="2024-12-13T01:47:34.809908764Z" level=info msg="Daemon has completed initialization"
Dec 13 01:47:34.835346 dockerd[1637]: time="2024-12-13T01:47:34.835187682Z" level=info msg="API listen on /run/docker.sock"
Dec 13 01:47:34.835469 systemd[1]: Started docker.service - Docker Application Container Engine.
Dec 13 01:47:35.450896 containerd[1449]: time="2024-12-13T01:47:35.450835506Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.4\""
Dec 13 01:47:36.191958 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2363924708.mount: Deactivated successfully.
Dec 13 01:47:37.702369 containerd[1449]: time="2024-12-13T01:47:37.702311022Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:47:37.703004 containerd[1449]: time="2024-12-13T01:47:37.702974363Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.4: active requests=0, bytes read=25615587"
Dec 13 01:47:37.703682 containerd[1449]: time="2024-12-13T01:47:37.703636534Z" level=info msg="ImageCreate event name:\"sha256:3e1123d6ebadbafa6eb77a9047f23f20befbbe2f177eb473a81b27a5de8c2ec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:47:37.706677 containerd[1449]: time="2024-12-13T01:47:37.706625298Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:ace6a943b058439bd6daeb74f152e7c36e6fc0b5e481cdff9364cd6ca0473e5e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:47:37.707794 containerd[1449]: time="2024-12-13T01:47:37.707751095Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.4\" with image id \"sha256:3e1123d6ebadbafa6eb77a9047f23f20befbbe2f177eb473a81b27a5de8c2ec5\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:ace6a943b058439bd6daeb74f152e7c36e6fc0b5e481cdff9364cd6ca0473e5e\", size \"25612385\" in 2.25686917s"
Dec 13 01:47:37.707850 containerd[1449]: time="2024-12-13T01:47:37.707793957Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.4\" returns image reference \"sha256:3e1123d6ebadbafa6eb77a9047f23f20befbbe2f177eb473a81b27a5de8c2ec5\""
Dec 13 01:47:37.708775 containerd[1449]: time="2024-12-13T01:47:37.708549596Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.4\""
Dec 13 01:47:38.416434 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Dec 13 01:47:38.425765 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 01:47:38.516723 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 01:47:38.520560 (kubelet)[1849]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 13 01:47:38.565437 kubelet[1849]: E1213 01:47:38.564682 1849 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 01:47:38.568248 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 01:47:38.568408 systemd[1]: kubelet.service: Failed with result 'exit-code'.
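The pull above reports both a payload size for the resolved digest (25612385 bytes) and a wall-clock duration (2.25686917s), which is enough to estimate effective registry throughput: 25612385 / 2.25686917 ≈ 11.3 MB/s. A small sketch of that arithmetic, with both numbers copied from the log entry:

    package main

    import "fmt"

    func main() {
    	const bytes = 25612385.0   // size reported for kube-apiserver:v1.31.4
    	const seconds = 2.25686917 // duration reported by containerd
    	fmt.Printf("effective pull throughput: %.1f MB/s\n", bytes/seconds/1e6)
    }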
Dec 13 01:47:39.417631 containerd[1449]: time="2024-12-13T01:47:39.417546230Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:47:39.418096 containerd[1449]: time="2024-12-13T01:47:39.418060180Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.4: active requests=0, bytes read=22470098"
Dec 13 01:47:39.418948 containerd[1449]: time="2024-12-13T01:47:39.418915342Z" level=info msg="ImageCreate event name:\"sha256:d5369864a42bf2c01d3ad462832526b7d3e40620c0e75fecefbffc203562ad55\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:47:39.421724 containerd[1449]: time="2024-12-13T01:47:39.421664554Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:4bd1d4a449e7a1a4f375bd7c71abf48a95f8949b38f725ded255077329f21f7b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:47:39.422934 containerd[1449]: time="2024-12-13T01:47:39.422888577Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.4\" with image id \"sha256:d5369864a42bf2c01d3ad462832526b7d3e40620c0e75fecefbffc203562ad55\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:4bd1d4a449e7a1a4f375bd7c71abf48a95f8949b38f725ded255077329f21f7b\", size \"23872417\" in 1.714306098s"
Dec 13 01:47:39.423195 containerd[1449]: time="2024-12-13T01:47:39.423020667Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.4\" returns image reference \"sha256:d5369864a42bf2c01d3ad462832526b7d3e40620c0e75fecefbffc203562ad55\""
Dec 13 01:47:39.423471 containerd[1449]: time="2024-12-13T01:47:39.423444062Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.4\""
Dec 13 01:47:40.864134 containerd[1449]: time="2024-12-13T01:47:40.863810388Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:47:40.865055 containerd[1449]: time="2024-12-13T01:47:40.864992935Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.4: active requests=0, bytes read=17024204"
Dec 13 01:47:40.865671 containerd[1449]: time="2024-12-13T01:47:40.865633534Z" level=info msg="ImageCreate event name:\"sha256:d99fc9a32f6b42ab5537eec09d599efae0f61c109406dae1ba255cec288fcb95\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:47:40.868523 containerd[1449]: time="2024-12-13T01:47:40.868470441Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:1a3081cb7d21763d22eb2c0781cc462d89f501ed523ad558dea1226f128fbfdd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:47:40.869726 containerd[1449]: time="2024-12-13T01:47:40.869696945Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.4\" with image id \"sha256:d99fc9a32f6b42ab5537eec09d599efae0f61c109406dae1ba255cec288fcb95\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:1a3081cb7d21763d22eb2c0781cc462d89f501ed523ad558dea1226f128fbfdd\", size \"18426541\" in 1.446211517s"
Dec 13 01:47:40.869776 containerd[1449]: time="2024-12-13T01:47:40.869735673Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.4\" returns image reference \"sha256:d99fc9a32f6b42ab5537eec09d599efae0f61c109406dae1ba255cec288fcb95\""
Dec 13 01:47:40.870137 containerd[1449]: time="2024-12-13T01:47:40.870101798Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.4\""
Dec 13 01:47:41.876012 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount300965388.mount: Deactivated successfully.
Dec 13 01:47:42.166758 containerd[1449]: time="2024-12-13T01:47:42.166644722Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:47:42.167608 containerd[1449]: time="2024-12-13T01:47:42.167357254Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.4: active requests=0, bytes read=26771428"
Dec 13 01:47:42.168300 containerd[1449]: time="2024-12-13T01:47:42.168096696Z" level=info msg="ImageCreate event name:\"sha256:34e142197cb996099cc1e98902c112642b3fb3dc559140c0a95279aa8d254d3a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:47:42.170378 containerd[1449]: time="2024-12-13T01:47:42.170306469Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:1739b3febca392035bf6edfe31efdfa55226be7b57389b2001ae357f7dcb99cf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:47:42.171053 containerd[1449]: time="2024-12-13T01:47:42.171020728Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.4\" with image id \"sha256:34e142197cb996099cc1e98902c112642b3fb3dc559140c0a95279aa8d254d3a\", repo tag \"registry.k8s.io/kube-proxy:v1.31.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:1739b3febca392035bf6edfe31efdfa55226be7b57389b2001ae357f7dcb99cf\", size \"26770445\" in 1.300878405s"
Dec 13 01:47:42.171124 containerd[1449]: time="2024-12-13T01:47:42.171054226Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.4\" returns image reference \"sha256:34e142197cb996099cc1e98902c112642b3fb3dc559140c0a95279aa8d254d3a\""
Dec 13 01:47:42.171731 containerd[1449]: time="2024-12-13T01:47:42.171711771Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Dec 13 01:47:42.815919 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3145135706.mount: Deactivated successfully.
Dec 13 01:47:43.805724 containerd[1449]: time="2024-12-13T01:47:43.805678244Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:47:43.807661 containerd[1449]: time="2024-12-13T01:47:43.807530758Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485383"
Dec 13 01:47:43.809674 containerd[1449]: time="2024-12-13T01:47:43.808702458Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:47:43.814615 containerd[1449]: time="2024-12-13T01:47:43.811332011Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:47:43.816262 containerd[1449]: time="2024-12-13T01:47:43.816225880Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.644485114s"
Dec 13 01:47:43.816361 containerd[1449]: time="2024-12-13T01:47:43.816344668Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\""
Dec 13 01:47:43.817180 containerd[1449]: time="2024-12-13T01:47:43.817158961Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Dec 13 01:47:44.367538 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3900810634.mount: Deactivated successfully.
Dec 13 01:47:44.375881 containerd[1449]: time="2024-12-13T01:47:44.375312315Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:47:44.377364 containerd[1449]: time="2024-12-13T01:47:44.377129485Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705"
Dec 13 01:47:44.379636 containerd[1449]: time="2024-12-13T01:47:44.379581537Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:47:44.383159 containerd[1449]: time="2024-12-13T01:47:44.383121099Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:47:44.384238 containerd[1449]: time="2024-12-13T01:47:44.384204556Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 566.59369ms"
Dec 13 01:47:44.384627 containerd[1449]: time="2024-12-13T01:47:44.384326621Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Dec 13 01:47:44.385009 containerd[1449]: time="2024-12-13T01:47:44.384917163Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\""
Dec 13 01:47:44.940537 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1778001223.mount: Deactivated successfully.
Dec 13 01:47:47.613399 containerd[1449]: time="2024-12-13T01:47:47.613341969Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:47:47.614636 containerd[1449]: time="2024-12-13T01:47:47.614605681Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66406427"
Dec 13 01:47:47.615709 containerd[1449]: time="2024-12-13T01:47:47.615677467Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:47:47.619065 containerd[1449]: time="2024-12-13T01:47:47.619029596Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:47:47.620407 containerd[1449]: time="2024-12-13T01:47:47.620363096Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 3.235413001s"
Dec 13 01:47:47.620452 containerd[1449]: time="2024-12-13T01:47:47.620405907Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\""
Dec 13 01:47:48.668733 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Dec 13 01:47:48.678938 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 01:47:48.768871 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 01:47:48.772541 (kubelet)[2004]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 13 01:47:48.807081 kubelet[2004]: E1213 01:47:48.806986 2004 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 01:47:48.809667 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 01:47:48.809802 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 01:47:51.974751 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 01:47:51.990226 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 01:47:52.014182 systemd[1]: Reloading requested from client PID 2019 ('systemctl') (unit session-7.scope)...
Dec 13 01:47:52.014198 systemd[1]: Reloading...
Dec 13 01:47:52.081706 zram_generator::config[2061]: No configuration found.
Dec 13 01:47:52.163579 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 01:47:52.215487 systemd[1]: Reloading finished in 200 ms.
Dec 13 01:47:52.253521 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 01:47:52.256145 systemd[1]: kubelet.service: Deactivated successfully.
Dec 13 01:47:52.256339 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 01:47:52.257788 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 01:47:52.350762 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 01:47:52.354543 (kubelet)[2105]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Dec 13 01:47:52.391191 kubelet[2105]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 01:47:52.391191 kubelet[2105]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Dec 13 01:47:52.391191 kubelet[2105]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 01:47:52.391528 kubelet[2105]: I1213 01:47:52.391389 2105 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Dec 13 01:47:53.412431 kubelet[2105]: I1213 01:47:53.412383 2105 server.go:486] "Kubelet version" kubeletVersion="v1.31.0"
Dec 13 01:47:53.412431 kubelet[2105]: I1213 01:47:53.412417 2105 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Dec 13 01:47:53.412786 kubelet[2105]: I1213 01:47:53.412681 2105 server.go:929] "Client rotation is on, will bootstrap in background"
Dec 13 01:47:53.454568 kubelet[2105]: E1213 01:47:53.454526 2105 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.141:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.141:6443: connect: connection refused" logger="UnhandledError"
Dec 13 01:47:53.455262 kubelet[2105]: I1213 01:47:53.455235 2105 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Dec 13 01:47:53.464916 kubelet[2105]: E1213 01:47:53.464745 2105 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Dec 13 01:47:53.464916 kubelet[2105]: I1213 01:47:53.464779 2105 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Dec 13 01:47:53.468319 kubelet[2105]: I1213 01:47:53.468291 2105 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Dec 13 01:47:53.473044 kubelet[2105]: I1213 01:47:53.470944 2105 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Dec 13 01:47:53.473044 kubelet[2105]: I1213 01:47:53.471099 2105 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Dec 13 01:47:53.473044 kubelet[2105]: I1213 01:47:53.471123 2105 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Dec 13 01:47:53.473210 kubelet[2105]: I1213 01:47:53.473165 2105 topology_manager.go:138] "Creating topology manager with none policy"
Dec 13 01:47:53.473210 kubelet[2105]: I1213 01:47:53.473178 2105 container_manager_linux.go:300] "Creating device plugin manager"
Dec 13 01:47:53.473392 kubelet[2105]: I1213 01:47:53.473358 2105 state_mem.go:36] "Initialized new in-memory state store"
Dec 13 01:47:53.475458 kubelet[2105]: I1213 01:47:53.475427 2105 kubelet.go:408] "Attempting to sync node with API server"
Dec 13 01:47:53.475458 kubelet[2105]: I1213 01:47:53.475459 2105 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Dec 13 01:47:53.475533 kubelet[2105]: I1213 01:47:53.475483 2105 kubelet.go:314] "Adding apiserver pod source"
Dec 13 01:47:53.475533 kubelet[2105]: I1213 01:47:53.475493 2105 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Dec 13 01:47:53.483419 kubelet[2105]: I1213 01:47:53.478542 2105 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Dec 13 01:47:53.483419 kubelet[2105]: I1213 01:47:53.481456 2105 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Dec 13 01:47:53.484900 kubelet[2105]: W1213 01:47:53.484740 2105 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.141:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.141:6443: connect: connection refused
Dec 13 01:47:53.484900 kubelet[2105]: E1213 01:47:53.484805 2105 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.141:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.141:6443: connect: connection refused" logger="UnhandledError"
Dec 13 01:47:53.485210 kubelet[2105]: W1213 01:47:53.485118 2105 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.141:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.141:6443: connect: connection refused
Dec 13 01:47:53.485210 kubelet[2105]: E1213 01:47:53.485167 2105 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.141:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.141:6443: connect: connection refused" logger="UnhandledError"
Dec 13 01:47:53.486468 kubelet[2105]: W1213 01:47:53.486441 2105 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Dec 13 01:47:53.487127 kubelet[2105]: I1213 01:47:53.487104 2105 server.go:1269] "Started kubelet"
Dec 13 01:47:53.487473 kubelet[2105]: I1213 01:47:53.487422 2105 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Dec 13 01:47:53.487521 kubelet[2105]: I1213 01:47:53.487495 2105 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Dec 13 01:47:53.487707 kubelet[2105]: I1213 01:47:53.487690 2105 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Dec 13 01:47:53.488603 kubelet[2105]: I1213 01:47:53.488564 2105 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Dec 13 01:47:53.489508 kubelet[2105]: I1213 01:47:53.489215 2105 server.go:460] "Adding debug handlers to kubelet server"
Dec 13 01:47:53.493790 kubelet[2105]: I1213 01:47:53.493758 2105 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Dec 13 01:47:53.494439 kubelet[2105]: I1213 01:47:53.494415 2105 volume_manager.go:289] "Starting Kubelet Volume Manager"
Dec 13 01:47:53.494521 kubelet[2105]: I1213 01:47:53.494501 2105 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Dec 13 01:47:53.494562 kubelet[2105]: I1213 01:47:53.494553 2105 reconciler.go:26] "Reconciler: start to sync state"
Dec 13 01:47:53.494906 kubelet[2105]: W1213 01:47:53.494862 2105 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.141:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.141:6443: connect: connection refused
Dec 13 01:47:53.494941 kubelet[2105]: E1213 01:47:53.494912 2105 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.141:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.141:6443: connect: connection refused" logger="UnhandledError"
Dec 13 01:47:53.495655 kubelet[2105]: E1213 01:47:53.495617 2105 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Dec 13 01:47:53.495655 kubelet[2105]: E1213 01:47:53.495626 2105 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.141:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.141:6443: connect: connection refused" interval="200ms"
Dec 13 01:47:53.496962 kubelet[2105]: I1213 01:47:53.496522 2105 factory.go:221] Registration of the systemd container factory successfully
Dec 13 01:47:53.496962 kubelet[2105]: I1213 01:47:53.496619 2105 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Dec 13 01:47:53.497919 kubelet[2105]: E1213 01:47:53.497899 2105 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Dec 13 01:47:53.498205 kubelet[2105]: I1213 01:47:53.498190 2105 factory.go:221] Registration of the containerd container factory successfully
Dec 13 01:47:53.498967 kubelet[2105]: E1213 01:47:53.498005 2105 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.141:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.141:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1810995b315fe9a4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-12-13 01:47:53.48708394 +0000 UTC m=+1.129617159,LastTimestamp:2024-12-13 01:47:53.48708394 +0000 UTC m=+1.129617159,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Dec 13 01:47:53.510879 kubelet[2105]: I1213 01:47:53.510660 2105 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Dec 13 01:47:53.511501 kubelet[2105]: I1213 01:47:53.511473 2105 cpu_manager.go:214] "Starting CPU manager" policy="none"
Dec 13 01:47:53.511501 kubelet[2105]: I1213 01:47:53.511492 2105 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Dec 13 01:47:53.511574 kubelet[2105]: I1213 01:47:53.511507 2105 state_mem.go:36] "Initialized new in-memory state store"
Dec 13 01:47:53.511948 kubelet[2105]: I1213 01:47:53.511930 2105 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Dec 13 01:47:53.511984 kubelet[2105]: I1213 01:47:53.511955 2105 status_manager.go:217] "Starting to sync pod status with apiserver"
Dec 13 01:47:53.511984 kubelet[2105]: I1213 01:47:53.511973 2105 kubelet.go:2321] "Starting kubelet main sync loop"
Dec 13 01:47:53.512144 kubelet[2105]: E1213 01:47:53.512034 2105 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Dec 13 01:47:53.595888 kubelet[2105]: E1213 01:47:53.595849 2105 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Dec 13 01:47:53.612132 kubelet[2105]: E1213 01:47:53.612101 2105 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Dec 13 01:47:53.623047 kubelet[2105]: I1213 01:47:53.623012 2105 policy_none.go:49] "None policy: Start"
Dec 13 01:47:53.623766 kubelet[2105]: W1213 01:47:53.623573 2105 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.141:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.141:6443: connect: connection refused
Dec 13 01:47:53.623766 kubelet[2105]: I1213 01:47:53.623632 2105 memory_manager.go:170] "Starting memorymanager" policy="None"
Dec 13 01:47:53.623766 kubelet[2105]: I1213 01:47:53.623686 2105 state_mem.go:35] "Initializing new in-memory state store"
Dec 13 01:47:53.623766 kubelet[2105]: E1213 01:47:53.623682 2105 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.141:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.141:6443: connect: connection refused" logger="UnhandledError"
Dec 13 01:47:53.630255 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Dec 13 01:47:53.648611 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Dec 13 01:47:53.651431 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
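Every reflector, lease, and event error in the stretch above is the same symptom: nothing is listening on https://10.0.0.141:6443 yet, because this kubelet itself has to start the static kube-apiserver pod before the API becomes reachable. A one-off probe that reproduces the "connect: connection refused" the informers keep hitting; the address is taken from the log, and on a live host the dial would succeed once the apiserver pod is up:

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	// Same endpoint the kubelet's client-go informers are dialing in the log.
    	conn, err := net.DialTimeout("tcp", "10.0.0.141:6443", 2*time.Second)
    	if err != nil {
    		fmt.Println("probe failed:", err) // e.g. "dial tcp 10.0.0.141:6443: connect: connection refused"
    		return
    	}
    	conn.Close()
    	fmt.Println("apiserver port is accepting connections")
    }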
Dec 13 01:47:53.670503 kubelet[2105]: I1213 01:47:53.670453 2105 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Dec 13 01:47:53.671087 kubelet[2105]: I1213 01:47:53.670696 2105 eviction_manager.go:189] "Eviction manager: starting control loop"
Dec 13 01:47:53.671087 kubelet[2105]: I1213 01:47:53.670709 2105 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Dec 13 01:47:53.671087 kubelet[2105]: I1213 01:47:53.670971 2105 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Dec 13 01:47:53.672154 kubelet[2105]: E1213 01:47:53.672129 2105 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Dec 13 01:47:53.696726 kubelet[2105]: E1213 01:47:53.696681 2105 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.141:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.141:6443: connect: connection refused" interval="400ms"
Dec 13 01:47:53.771987 kubelet[2105]: I1213 01:47:53.771950 2105 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Dec 13 01:47:53.772384 kubelet[2105]: E1213 01:47:53.772351 2105 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.141:6443/api/v1/nodes\": dial tcp 10.0.0.141:6443: connect: connection refused" node="localhost"
Dec 13 01:47:53.820183 systemd[1]: Created slice kubepods-burstable-pod50a9ae38ddb3bec3278d8dc73a6a7009.slice - libcontainer container kubepods-burstable-pod50a9ae38ddb3bec3278d8dc73a6a7009.slice.
Dec 13 01:47:53.838714 systemd[1]: Created slice kubepods-burstable-poda52b86ce975f496e6002ba953fa9b888.slice - libcontainer container kubepods-burstable-poda52b86ce975f496e6002ba953fa9b888.slice.
Dec 13 01:47:53.853055 systemd[1]: Created slice kubepods-burstable-pode379065ffacbed8f224deb7db0e0eb10.slice - libcontainer container kubepods-burstable-pode379065ffacbed8f224deb7db0e0eb10.slice.
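The three kubepods-burstable-pod<uid>.slice units above correspond to the static control-plane pods, one slice per pod UID, as created by the systemd cgroup driver (SystemdCgroup:true in the CRI runtime options logged earlier). A sketch of how such a slice name can be derived from a pod UID; replacing '-' with '_' is the usual systemd-driver convention, stated here as an assumption since these particular UIDs contain no dashes:

    package main

    import (
    	"fmt"
    	"strings"
    )

    // burstableSliceName builds a systemd slice name like the ones created in the log.
    // Dashes in the UID are escaped to underscores (assumed convention of the systemd cgroup driver).
    func burstableSliceName(podUID string) string {
    	return "kubepods-burstable-pod" + strings.ReplaceAll(podUID, "-", "_") + ".slice"
    }

    func main() {
    	fmt.Println(burstableSliceName("50a9ae38ddb3bec3278d8dc73a6a7009"))
    	// -> kubepods-burstable-pod50a9ae38ddb3bec3278d8dc73a6a7009.slice
    }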
Dec 13 01:47:53.895683 kubelet[2105]: I1213 01:47:53.895654 2105 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/50a9ae38ddb3bec3278d8dc73a6a7009-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"50a9ae38ddb3bec3278d8dc73a6a7009\") " pod="kube-system/kube-controller-manager-localhost"
Dec 13 01:47:53.895683 kubelet[2105]: I1213 01:47:53.895684 2105 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/50a9ae38ddb3bec3278d8dc73a6a7009-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"50a9ae38ddb3bec3278d8dc73a6a7009\") " pod="kube-system/kube-controller-manager-localhost"
Dec 13 01:47:53.895794 kubelet[2105]: I1213 01:47:53.895714 2105 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a52b86ce975f496e6002ba953fa9b888-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a52b86ce975f496e6002ba953fa9b888\") " pod="kube-system/kube-scheduler-localhost"
Dec 13 01:47:53.895794 kubelet[2105]: I1213 01:47:53.895731 2105 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e379065ffacbed8f224deb7db0e0eb10-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"e379065ffacbed8f224deb7db0e0eb10\") " pod="kube-system/kube-apiserver-localhost"
Dec 13 01:47:53.895794 kubelet[2105]: I1213 01:47:53.895748 2105 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/50a9ae38ddb3bec3278d8dc73a6a7009-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"50a9ae38ddb3bec3278d8dc73a6a7009\") " pod="kube-system/kube-controller-manager-localhost"
Dec 13 01:47:53.895794 kubelet[2105]: I1213 01:47:53.895763 2105 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/50a9ae38ddb3bec3278d8dc73a6a7009-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"50a9ae38ddb3bec3278d8dc73a6a7009\") " pod="kube-system/kube-controller-manager-localhost"
Dec 13 01:47:53.895794 kubelet[2105]: I1213 01:47:53.895791 2105 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/50a9ae38ddb3bec3278d8dc73a6a7009-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"50a9ae38ddb3bec3278d8dc73a6a7009\") " pod="kube-system/kube-controller-manager-localhost"
Dec 13 01:47:53.895890 kubelet[2105]: I1213 01:47:53.895809 2105 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e379065ffacbed8f224deb7db0e0eb10-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"e379065ffacbed8f224deb7db0e0eb10\") " pod="kube-system/kube-apiserver-localhost"
Dec 13 01:47:53.895890 kubelet[2105]: I1213 01:47:53.895828 2105 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e379065ffacbed8f224deb7db0e0eb10-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"e379065ffacbed8f224deb7db0e0eb10\") " pod="kube-system/kube-apiserver-localhost"
Dec 13 01:47:53.973884 kubelet[2105]: I1213 01:47:53.973803 2105 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Dec 13 01:47:53.974107 kubelet[2105]: E1213 01:47:53.974068 2105 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.141:6443/api/v1/nodes\": dial tcp 10.0.0.141:6443: connect: connection refused" node="localhost"
Dec 13 01:47:54.097864 kubelet[2105]: E1213 01:47:54.097825 2105 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.141:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.141:6443: connect: connection refused" interval="800ms"
Dec 13 01:47:54.138131 kubelet[2105]: E1213 01:47:54.138092 2105 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:47:54.138685 containerd[1449]: time="2024-12-13T01:47:54.138649933Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:50a9ae38ddb3bec3278d8dc73a6a7009,Namespace:kube-system,Attempt:0,}"
Dec 13 01:47:54.151919 kubelet[2105]: E1213 01:47:54.151897 2105 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:47:54.152434 containerd[1449]: time="2024-12-13T01:47:54.152357691Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a52b86ce975f496e6002ba953fa9b888,Namespace:kube-system,Attempt:0,}"
Dec 13 01:47:54.155559 kubelet[2105]: E1213 01:47:54.155527 2105 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:47:54.155888 containerd[1449]: time="2024-12-13T01:47:54.155860484Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:e379065ffacbed8f224deb7db0e0eb10,Namespace:kube-system,Attempt:0,}"
Dec 13 01:47:54.375776 kubelet[2105]: I1213 01:47:54.375694 2105 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Dec 13 01:47:54.376033 kubelet[2105]: E1213 01:47:54.376009 2105 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.141:6443/api/v1/nodes\": dial tcp 10.0.0.141:6443: connect: connection refused" node="localhost"
Dec 13 01:47:54.540339 kubelet[2105]: W1213 01:47:54.540236 2105 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.141:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.141:6443: connect: connection refused
Dec 13 01:47:54.540339 kubelet[2105]: E1213 01:47:54.540307 2105 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.141:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.141:6443: connect: connection refused" logger="UnhandledError"
Dec 13 01:47:54.586003 kubelet[2105]: W1213 01:47:54.585912 2105 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.141:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.141:6443: connect: connection refused
Dec 13 01:47:54.586003 kubelet[2105]: E1213 01:47:54.585970 2105 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.141:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.141:6443: connect: connection refused" logger="UnhandledError"
Dec 13 01:47:54.611995 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount924313101.mount: Deactivated successfully.
Dec 13 01:47:54.617549 containerd[1449]: time="2024-12-13T01:47:54.617506791Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Dec 13 01:47:54.619192 containerd[1449]: time="2024-12-13T01:47:54.618463226Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Dec 13 01:47:54.619192 containerd[1449]: time="2024-12-13T01:47:54.618776366Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Dec 13 01:47:54.619192 containerd[1449]: time="2024-12-13T01:47:54.619184265Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175"
Dec 13 01:47:54.619785 containerd[1449]: time="2024-12-13T01:47:54.619751097Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Dec 13 01:47:54.620626 containerd[1449]: time="2024-12-13T01:47:54.620565014Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Dec 13 01:47:54.620998 containerd[1449]: time="2024-12-13T01:47:54.620976796Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Dec 13 01:47:54.623558 containerd[1449]: time="2024-12-13T01:47:54.623518229Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Dec 13 01:47:54.625107 containerd[1449]: time="2024-12-13T01:47:54.625075164Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 472.657223ms"
Dec 13 01:47:54.627983 containerd[1449]: time="2024-12-13T01:47:54.627667560Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 488.941884ms"
Dec 13 01:47:54.628504 containerd[1449]: time="2024-12-13T01:47:54.628471908Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 472.546571ms"
Dec 13 01:47:54.759254 containerd[1449]: time="2024-12-13T01:47:54.759121866Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 01:47:54.759254 containerd[1449]: time="2024-12-13T01:47:54.759193886Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 01:47:54.759254 containerd[1449]: time="2024-12-13T01:47:54.759221989Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:47:54.759523 containerd[1449]: time="2024-12-13T01:47:54.759323954Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:47:54.759784 containerd[1449]: time="2024-12-13T01:47:54.759695383Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 01:47:54.759784 containerd[1449]: time="2024-12-13T01:47:54.759736657Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 01:47:54.759784 containerd[1449]: time="2024-12-13T01:47:54.759746746Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:47:54.760038 containerd[1449]: time="2024-12-13T01:47:54.759819446Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:47:54.760566 containerd[1449]: time="2024-12-13T01:47:54.760497850Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 01:47:54.760745 containerd[1449]: time="2024-12-13T01:47:54.760673516Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 01:47:54.760745 containerd[1449]: time="2024-12-13T01:47:54.760696736Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:47:54.761478 containerd[1449]: time="2024-12-13T01:47:54.761377862Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:47:54.781780 systemd[1]: Started cri-containerd-634385286f5eaa6f199897f0268414d048de2c86007a89bd025ec6cee17988d2.scope - libcontainer container 634385286f5eaa6f199897f0268414d048de2c86007a89bd025ec6cee17988d2.
Dec 13 01:47:54.782554 kubelet[2105]: W1213 01:47:54.782180 2105 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.141:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.141:6443: connect: connection refused
Dec 13 01:47:54.782554 kubelet[2105]: E1213 01:47:54.782234 2105 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.141:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.141:6443: connect: connection refused" logger="UnhandledError"
Dec 13 01:47:54.783391 systemd[1]: Started cri-containerd-a607d32618b48769e93ccf37d979df8c4f31fa736cba473c54976180e94a2653.scope - libcontainer container a607d32618b48769e93ccf37d979df8c4f31fa736cba473c54976180e94a2653.
Dec 13 01:47:54.784501 systemd[1]: Started cri-containerd-ac5190cd7701bb46f115179959c869a4d9236bce0636f4a3ec5d89a9a121a794.scope - libcontainer container ac5190cd7701bb46f115179959c869a4d9236bce0636f4a3ec5d89a9a121a794.
Dec 13 01:47:54.817928 containerd[1449]: time="2024-12-13T01:47:54.817832725Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:e379065ffacbed8f224deb7db0e0eb10,Namespace:kube-system,Attempt:0,} returns sandbox id \"a607d32618b48769e93ccf37d979df8c4f31fa736cba473c54976180e94a2653\""
Dec 13 01:47:54.817928 containerd[1449]: time="2024-12-13T01:47:54.817860188Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:50a9ae38ddb3bec3278d8dc73a6a7009,Namespace:kube-system,Attempt:0,} returns sandbox id \"634385286f5eaa6f199897f0268414d048de2c86007a89bd025ec6cee17988d2\""
Dec 13 01:47:54.820364 kubelet[2105]: E1213 01:47:54.820061 2105 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:47:54.820364 kubelet[2105]: E1213 01:47:54.820260 2105 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:47:54.824846 containerd[1449]: time="2024-12-13T01:47:54.824802921Z" level=info msg="CreateContainer within sandbox \"634385286f5eaa6f199897f0268414d048de2c86007a89bd025ec6cee17988d2\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Dec 13 01:47:54.825275 containerd[1449]: time="2024-12-13T01:47:54.825159297Z" level=info msg="CreateContainer within sandbox \"a607d32618b48769e93ccf37d979df8c4f31fa736cba473c54976180e94a2653\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Dec 13 01:47:54.826703 containerd[1449]: time="2024-12-13T01:47:54.826671355Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a52b86ce975f496e6002ba953fa9b888,Namespace:kube-system,Attempt:0,} returns sandbox id \"ac5190cd7701bb46f115179959c869a4d9236bce0636f4a3ec5d89a9a121a794\""
Dec 13 01:47:54.827570 kubelet[2105]: E1213 01:47:54.827423 2105 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:47:54.828756 containerd[1449]: time="2024-12-13T01:47:54.828713132Z" level=info msg="CreateContainer within sandbox \"ac5190cd7701bb46f115179959c869a4d9236bce0636f4a3ec5d89a9a121a794\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Dec 13 01:47:54.850311 containerd[1449]: time="2024-12-13T01:47:54.850259048Z" level=info msg="CreateContainer within sandbox \"634385286f5eaa6f199897f0268414d048de2c86007a89bd025ec6cee17988d2\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"20b1e26a9b85e6fd73d108dfd6504b40676cd2e5820c75470958aa59ca5b4f1d\""
Dec 13 01:47:54.851005 containerd[1449]: time="2024-12-13T01:47:54.850971521Z" level=info msg="StartContainer for \"20b1e26a9b85e6fd73d108dfd6504b40676cd2e5820c75470958aa59ca5b4f1d\""
Dec 13 01:47:54.851789 containerd[1449]: time="2024-12-13T01:47:54.851758095Z" level=info msg="CreateContainer within sandbox \"a607d32618b48769e93ccf37d979df8c4f31fa736cba473c54976180e94a2653\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"9e305498d0cb279957e3867c024b4b3b25a4577bf8fe448d90b3f17329c321a3\""
Dec 13 01:47:54.852403 containerd[1449]: time="2024-12-13T01:47:54.852380933Z" level=info msg="StartContainer for \"9e305498d0cb279957e3867c024b4b3b25a4577bf8fe448d90b3f17329c321a3\""
Dec 13 01:47:54.854031 containerd[1449]: time="2024-12-13T01:47:54.853994314Z" level=info msg="CreateContainer within sandbox \"ac5190cd7701bb46f115179959c869a4d9236bce0636f4a3ec5d89a9a121a794\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"d6dd333e3ace536fadbb0365cb3aaeaa725e5818b67d44a47cae88f726dac4ae\""
Dec 13 01:47:54.855599 containerd[1449]: time="2024-12-13T01:47:54.854459501Z" level=info msg="StartContainer for \"d6dd333e3ace536fadbb0365cb3aaeaa725e5818b67d44a47cae88f726dac4ae\""
Dec 13 01:47:54.874021 systemd[1]: Started cri-containerd-20b1e26a9b85e6fd73d108dfd6504b40676cd2e5820c75470958aa59ca5b4f1d.scope - libcontainer container 20b1e26a9b85e6fd73d108dfd6504b40676cd2e5820c75470958aa59ca5b4f1d.
Dec 13 01:47:54.888807 systemd[1]: Started cri-containerd-d6dd333e3ace536fadbb0365cb3aaeaa725e5818b67d44a47cae88f726dac4ae.scope - libcontainer container d6dd333e3ace536fadbb0365cb3aaeaa725e5818b67d44a47cae88f726dac4ae.
Dec 13 01:47:54.892168 systemd[1]: Started cri-containerd-9e305498d0cb279957e3867c024b4b3b25a4577bf8fe448d90b3f17329c321a3.scope - libcontainer container 9e305498d0cb279957e3867c024b4b3b25a4577bf8fe448d90b3f17329c321a3.
Dec 13 01:47:54.899053 kubelet[2105]: E1213 01:47:54.898992 2105 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.141:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.141:6443: connect: connection refused" interval="1.6s"
Dec 13 01:47:54.903706 kubelet[2105]: W1213 01:47:54.903512 2105 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.141:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.141:6443: connect: connection refused
Dec 13 01:47:54.903706 kubelet[2105]: E1213 01:47:54.903667 2105 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.141:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.141:6443: connect: connection refused" logger="UnhandledError"
Dec 13 01:47:54.923804 containerd[1449]: time="2024-12-13T01:47:54.923762888Z" level=info msg="StartContainer for \"20b1e26a9b85e6fd73d108dfd6504b40676cd2e5820c75470958aa59ca5b4f1d\" returns successfully"
Dec 13 01:47:54.950484 containerd[1449]: time="2024-12-13T01:47:54.950435106Z" level=info msg="StartContainer for \"d6dd333e3ace536fadbb0365cb3aaeaa725e5818b67d44a47cae88f726dac4ae\" returns successfully"
Dec 13 01:47:54.950629 containerd[1449]: time="2024-12-13T01:47:54.950450479Z" level=info msg="StartContainer for \"9e305498d0cb279957e3867c024b4b3b25a4577bf8fe448d90b3f17329c321a3\" returns successfully"
Dec 13 01:47:55.180596 kubelet[2105]: I1213 01:47:55.178144 2105 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Dec 13 01:47:55.522324 kubelet[2105]: E1213 01:47:55.522230 2105 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:47:55.524856 kubelet[2105]: E1213 01:47:55.524827 2105 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:47:55.526615 kubelet[2105]: E1213 01:47:55.526333 2105 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:47:56.528267 kubelet[2105]: E1213 01:47:56.528048 2105 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:47:57.262141 kubelet[2105]: E1213 01:47:57.262092 2105 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Dec 13 01:47:57.449671 kubelet[2105]: I1213 01:47:57.449624 2105 kubelet_node_status.go:75] "Successfully registered node" node="localhost"
Dec 13 01:47:57.449798 kubelet[2105]: E1213 01:47:57.449700 2105 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found"
Dec 13 01:47:57.458405 kubelet[2105]: E1213 01:47:57.458370 2105 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Dec 13 01:47:57.464183 kubelet[2105]: E1213 01:47:57.464033 2105 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.1810995b315fe9a4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-12-13 01:47:53.48708394 +0000 UTC m=+1.129617159,LastTimestamp:2024-12-13 01:47:53.48708394 +0000 UTC m=+1.129617159,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Dec 13 01:47:57.518152 kubelet[2105]: E1213 01:47:57.517796 2105 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.1810995b3204c98f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-12-13 01:47:53.497889167 +0000 UTC m=+1.140422386,LastTimestamp:2024-12-13 01:47:53.497889167 +0000 UTC m=+1.140422386,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Dec 13 01:47:57.529272 kubelet[2105]: E1213 01:47:57.529201 2105 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:47:57.559277 kubelet[2105]: E1213 01:47:57.559246 2105 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Dec 13 01:47:57.571390 kubelet[2105]: E1213 01:47:57.571280 2105 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.1810995b32c0d2b4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node localhost status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-12-13 01:47:53.510212276 +0000 UTC m=+1.152745495,LastTimestamp:2024-12-13 01:47:53.510212276 +0000 UTC m=+1.152745495,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Dec 13 01:47:57.660384 kubelet[2105]: E1213 01:47:57.659704 2105 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Dec 13 01:47:58.480006 kubelet[2105]: I1213 01:47:58.479805 2105 apiserver.go:52] "Watching apiserver"
Dec 13 01:47:58.495429 kubelet[2105]: I1213 01:47:58.495394 2105 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Dec 13 01:47:59.312846 systemd[1]: Reloading requested from client PID 2376 ('systemctl') (unit session-7.scope)...
Dec 13 01:47:59.312861 systemd[1]: Reloading...
Dec 13 01:47:59.385619 zram_generator::config[2418]: No configuration found.
Dec 13 01:47:59.466087 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 01:47:59.497600 kubelet[2105]: E1213 01:47:59.495419 2105 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:47:59.530979 kubelet[2105]: E1213 01:47:59.530945 2105 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:47:59.531802 systemd[1]: Reloading finished in 218 ms.
Dec 13 01:47:59.562141 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 01:47:59.578723 systemd[1]: kubelet.service: Deactivated successfully.
Dec 13 01:47:59.580655 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 01:47:59.580715 systemd[1]: kubelet.service: Consumed 1.536s CPU time, 119.2M memory peak, 0B memory swap peak.
Dec 13 01:47:59.588839 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 01:47:59.676985 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 01:47:59.680460 (kubelet)[2458]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Dec 13 01:47:59.717899 kubelet[2458]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 01:47:59.717899 kubelet[2458]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Dec 13 01:47:59.717899 kubelet[2458]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 01:47:59.717899 kubelet[2458]: I1213 01:47:59.716959 2458 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Dec 13 01:47:59.723179 kubelet[2458]: I1213 01:47:59.723031 2458 server.go:486] "Kubelet version" kubeletVersion="v1.31.0"
Dec 13 01:47:59.723179 kubelet[2458]: I1213 01:47:59.723059 2458 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Dec 13 01:47:59.724407 kubelet[2458]: I1213 01:47:59.723634 2458 server.go:929] "Client rotation is on, will bootstrap in background"
Dec 13 01:47:59.728616 kubelet[2458]: I1213 01:47:59.727466 2458 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Dec 13 01:47:59.730265 kubelet[2458]: I1213 01:47:59.730237 2458 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Dec 13 01:47:59.732862 kubelet[2458]: E1213 01:47:59.732809 2458 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Dec 13 01:47:59.732862 kubelet[2458]: I1213 01:47:59.732840 2458 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Dec 13 01:47:59.735618 kubelet[2458]: I1213 01:47:59.735210 2458 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Dec 13 01:47:59.735618 kubelet[2458]: I1213 01:47:59.735329 2458 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Dec 13 01:47:59.735618 kubelet[2458]: I1213 01:47:59.735425 2458 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Dec 13 01:47:59.735618 kubelet[2458]: I1213 01:47:59.735444 2458 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Dec 13 01:47:59.735795 kubelet[2458]: I1213 01:47:59.735631 2458 topology_manager.go:138] "Creating topology manager with none policy"
Dec 13 01:47:59.735795 kubelet[2458]: I1213 01:47:59.735642 2458 container_manager_linux.go:300] "Creating device plugin manager"
Dec 13 01:47:59.735795 kubelet[2458]: I1213 01:47:59.735672 2458 state_mem.go:36] "Initialized new in-memory state store"
Dec 13 01:47:59.735795 kubelet[2458]: I1213 01:47:59.735766 2458 kubelet.go:408] "Attempting to sync node with API server"
Dec 13 01:47:59.735795 kubelet[2458]: I1213 01:47:59.735778 2458 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Dec 13 01:47:59.735795 kubelet[2458]: I1213 01:47:59.735793 2458 kubelet.go:314] "Adding apiserver pod source"
Dec 13 01:47:59.735926 kubelet[2458]: I1213 01:47:59.735802 2458 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Dec 13 01:47:59.738051 kubelet[2458]: I1213 01:47:59.738026 2458 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Dec 13 01:47:59.738500 kubelet[2458]: I1213 01:47:59.738472 2458 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Dec 13 01:47:59.739630 kubelet[2458]: I1213 01:47:59.739577 2458 server.go:1269] "Started kubelet"
Dec 13 01:47:59.741336 kubelet[2458]: I1213 01:47:59.741315 2458 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Dec 13 01:47:59.745100 kubelet[2458]: I1213 01:47:59.745025 2458 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Dec 13 01:47:59.745733 kubelet[2458]: I1213 01:47:59.745693 2458 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Dec 13 01:47:59.746588 kubelet[2458]: I1213 01:47:59.746014 2458 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Dec 13 01:47:59.746588 kubelet[2458]: I1213 01:47:59.746264 2458 volume_manager.go:289] "Starting Kubelet Volume Manager"
Dec 13 01:47:59.746588 kubelet[2458]: I1213 01:47:59.746421 2458 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Dec 13 01:47:59.746775 kubelet[2458]: E1213 01:47:59.746756 2458 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Dec 13 01:47:59.747341 kubelet[2458]: I1213 01:47:59.746526 2458 reconciler.go:26] "Reconciler: start to sync state"
Dec 13 01:47:59.747855 kubelet[2458]: I1213 01:47:59.747826 2458 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Dec 13 01:47:59.749674 kubelet[2458]: I1213 01:47:59.749646 2458 server.go:460] "Adding debug handlers to kubelet server"
Dec 13 01:47:59.758282 kubelet[2458]: I1213 01:47:59.758002 2458 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Dec 13 01:47:59.760619 kubelet[2458]: I1213 01:47:59.759119 2458 factory.go:221] Registration of the containerd container factory successfully
Dec 13 01:47:59.760619 kubelet[2458]: I1213 01:47:59.760377 2458 factory.go:221] Registration of the systemd container factory successfully
Dec 13 01:47:59.760619 kubelet[2458]: I1213 01:47:59.760473 2458 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Dec 13 01:47:59.762687 kubelet[2458]: I1213 01:47:59.762662 2458 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Dec 13 01:47:59.762791 kubelet[2458]: I1213 01:47:59.762780 2458 status_manager.go:217] "Starting to sync pod status with apiserver"
Dec 13 01:47:59.762863 kubelet[2458]: I1213 01:47:59.762853 2458 kubelet.go:2321] "Starting kubelet main sync loop"
Dec 13 01:47:59.762953 kubelet[2458]: E1213 01:47:59.762933 2458 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Dec 13 01:47:59.766990 kubelet[2458]: E1213 01:47:59.759662 2458 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Dec 13 01:47:59.793727 kubelet[2458]: I1213 01:47:59.793693 2458 cpu_manager.go:214] "Starting CPU manager" policy="none"
Dec 13 01:47:59.793727 kubelet[2458]: I1213 01:47:59.793716 2458 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Dec 13 01:47:59.793727 kubelet[2458]: I1213 01:47:59.793735 2458 state_mem.go:36] "Initialized new in-memory state store"
Dec 13 01:47:59.793923 kubelet[2458]: I1213 01:47:59.793900 2458 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Dec 13 01:47:59.793951 kubelet[2458]: I1213 01:47:59.793916 2458 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Dec 13 01:47:59.793951 kubelet[2458]: I1213 01:47:59.793935 2458 policy_none.go:49] "None policy: Start"
Dec 13 01:47:59.795886 kubelet[2458]: I1213 01:47:59.795860 2458 memory_manager.go:170] "Starting memorymanager" policy="None"
Dec 13 01:47:59.795886 kubelet[2458]: I1213 01:47:59.795882 2458 state_mem.go:35] "Initializing new in-memory state store"
Dec 13 01:47:59.796051 kubelet[2458]: I1213 01:47:59.796035 2458 state_mem.go:75] "Updated machine memory state"
Dec 13 01:47:59.799551 kubelet[2458]: I1213 01:47:59.799525 2458 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Dec 13 01:47:59.799989 kubelet[2458]: I1213 01:47:59.799959 2458 eviction_manager.go:189] "Eviction manager: starting control loop"
Dec 13 01:47:59.800037 kubelet[2458]: I1213 01:47:59.799977 2458 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Dec 13 01:47:59.800255 kubelet[2458]: I1213 01:47:59.800225 2458 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Dec 13 01:47:59.869430 kubelet[2458]: E1213 01:47:59.869336 2458 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
Dec 13 01:47:59.903856 kubelet[2458]: I1213 01:47:59.903813 2458 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Dec 13 01:47:59.909539 kubelet[2458]: I1213 01:47:59.909508 2458 kubelet_node_status.go:111] "Node was previously registered" node="localhost"
Dec 13 01:47:59.909686 kubelet[2458]: I1213 01:47:59.909578 2458 kubelet_node_status.go:75] "Successfully registered node" node="localhost"
Dec 13 01:47:59.949291 kubelet[2458]: I1213 01:47:59.949245 2458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/50a9ae38ddb3bec3278d8dc73a6a7009-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"50a9ae38ddb3bec3278d8dc73a6a7009\") " pod="kube-system/kube-controller-manager-localhost"
Dec 13 01:47:59.949291 kubelet[2458]: I1213 01:47:59.949287 2458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a52b86ce975f496e6002ba953fa9b888-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a52b86ce975f496e6002ba953fa9b888\") " pod="kube-system/kube-scheduler-localhost"
Dec 13 01:47:59.949410 kubelet[2458]: I1213 01:47:59.949308 2458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e379065ffacbed8f224deb7db0e0eb10-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"e379065ffacbed8f224deb7db0e0eb10\") " pod="kube-system/kube-apiserver-localhost"
Dec 13 01:47:59.949410 kubelet[2458]: I1213 01:47:59.949326 2458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e379065ffacbed8f224deb7db0e0eb10-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"e379065ffacbed8f224deb7db0e0eb10\") " pod="kube-system/kube-apiserver-localhost"
Dec 13 01:47:59.949410 kubelet[2458]: I1213 01:47:59.949344 2458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e379065ffacbed8f224deb7db0e0eb10-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"e379065ffacbed8f224deb7db0e0eb10\") " pod="kube-system/kube-apiserver-localhost"
Dec 13 01:47:59.949410 kubelet[2458]: I1213 01:47:59.949369 2458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/50a9ae38ddb3bec3278d8dc73a6a7009-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"50a9ae38ddb3bec3278d8dc73a6a7009\") " pod="kube-system/kube-controller-manager-localhost"
Dec 13 01:47:59.949410 kubelet[2458]: I1213 01:47:59.949388 2458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/50a9ae38ddb3bec3278d8dc73a6a7009-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"50a9ae38ddb3bec3278d8dc73a6a7009\") " pod="kube-system/kube-controller-manager-localhost"
Dec 13 01:47:59.949517 kubelet[2458]: I1213 01:47:59.949408 2458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/50a9ae38ddb3bec3278d8dc73a6a7009-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"50a9ae38ddb3bec3278d8dc73a6a7009\") " pod="kube-system/kube-controller-manager-localhost"
Dec 13 01:47:59.949517 kubelet[2458]: I1213 01:47:59.949428 2458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/50a9ae38ddb3bec3278d8dc73a6a7009-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"50a9ae38ddb3bec3278d8dc73a6a7009\") " pod="kube-system/kube-controller-manager-localhost"
Dec 13 01:48:00.169435 kubelet[2458]: E1213 01:48:00.169399 2458 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:48:00.169789 kubelet[2458]: E1213 01:48:00.169759 2458 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:48:00.169933 kubelet[2458]: E1213 01:48:00.169901 2458 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:48:00.737042 kubelet[2458]: I1213 01:48:00.737007 2458 apiserver.go:52] "Watching apiserver"
Dec 13 01:48:00.747219 kubelet[2458]: I1213 01:48:00.747094 2458 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Dec 13 01:48:00.782325 kubelet[2458]: E1213 01:48:00.782166 2458 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:48:00.782881 kubelet[2458]: E1213 01:48:00.782861 2458 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:48:00.795224 kubelet[2458]: E1213 01:48:00.795094 2458 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
Dec 13 01:48:00.795224 kubelet[2458]: E1213 01:48:00.795227 2458 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:48:00.829456 kubelet[2458]: I1213 01:48:00.829376 2458 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.82935837 podStartE2EDuration="1.82935837s" podCreationTimestamp="2024-12-13 01:47:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:48:00.814559969 +0000 UTC m=+1.131206024" watchObservedRunningTime="2024-12-13 01:48:00.82935837 +0000 UTC m=+1.146004385"
Dec 13 01:48:00.847806 kubelet[2458]: I1213 01:48:00.847446 2458 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.8474307730000001 podStartE2EDuration="1.847430773s" podCreationTimestamp="2024-12-13 01:47:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:48:00.829683073 +0000 UTC m=+1.146329088" watchObservedRunningTime="2024-12-13 01:48:00.847430773 +0000 UTC m=+1.164076828"
Dec 13 01:48:00.863761 kubelet[2458]: I1213 01:48:00.861687 2458 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.861669099 podStartE2EDuration="1.861669099s" podCreationTimestamp="2024-12-13 01:47:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:48:00.847678592 +0000 UTC m=+1.164324727" watchObservedRunningTime="2024-12-13 01:48:00.861669099 +0000 UTC m=+1.178315154"
Dec 13 01:48:01.785800 kubelet[2458]: E1213 01:48:01.785757 2458 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:48:01.786130 kubelet[2458]: E1213 01:48:01.785824 2458 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:48:01.797610 kubelet[2458]: E1213 01:48:01.797541 2458 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:48:02.787259 kubelet[2458]: E1213 01:48:02.787228 2458 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:48:04.433121 sudo[1618]: pam_unix(sudo:session): session closed for user root
Dec 13 01:48:04.435694 sshd[1614]: pam_unix(sshd:session): session closed for user core
Dec 13 01:48:04.438621 systemd[1]: sshd@6-10.0.0.141:22-10.0.0.1:56258.service: Deactivated successfully.
Dec 13 01:48:04.440214 systemd[1]: session-7.scope: Deactivated successfully.
Dec 13 01:48:04.440413 systemd[1]: session-7.scope: Consumed 6.378s CPU time, 152.9M memory peak, 0B memory swap peak.
Dec 13 01:48:04.440914 systemd-logind[1427]: Session 7 logged out. Waiting for processes to exit.
Dec 13 01:48:04.442048 systemd-logind[1427]: Removed session 7.
Dec 13 01:48:05.548859 kubelet[2458]: I1213 01:48:05.548826 2458 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Dec 13 01:48:05.550597 containerd[1449]: time="2024-12-13T01:48:05.550554158Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Dec 13 01:48:05.551132 kubelet[2458]: I1213 01:48:05.550760 2458 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Dec 13 01:48:05.887072 systemd[1]: Created slice kubepods-besteffort-poda2ba2033_d840_4373_8605_08f8f4b0f37c.slice - libcontainer container kubepods-besteffort-poda2ba2033_d840_4373_8605_08f8f4b0f37c.slice.
Dec 13 01:48:05.891158 kubelet[2458]: I1213 01:48:05.891123 2458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a2ba2033-d840-4373-8605-08f8f4b0f37c-lib-modules\") pod \"kube-proxy-srfj8\" (UID: \"a2ba2033-d840-4373-8605-08f8f4b0f37c\") " pod="kube-system/kube-proxy-srfj8"
Dec 13 01:48:05.891158 kubelet[2458]: I1213 01:48:05.891162 2458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j6d2d\" (UniqueName: \"kubernetes.io/projected/a2ba2033-d840-4373-8605-08f8f4b0f37c-kube-api-access-j6d2d\") pod \"kube-proxy-srfj8\" (UID: \"a2ba2033-d840-4373-8605-08f8f4b0f37c\") " pod="kube-system/kube-proxy-srfj8"
Dec 13 01:48:05.891274 kubelet[2458]: I1213 01:48:05.891184 2458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/a2ba2033-d840-4373-8605-08f8f4b0f37c-kube-proxy\") pod \"kube-proxy-srfj8\" (UID: \"a2ba2033-d840-4373-8605-08f8f4b0f37c\") " pod="kube-system/kube-proxy-srfj8"
Dec 13 01:48:05.891274 kubelet[2458]: I1213 01:48:05.891201 2458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a2ba2033-d840-4373-8605-08f8f4b0f37c-xtables-lock\") pod \"kube-proxy-srfj8\" (UID: \"a2ba2033-d840-4373-8605-08f8f4b0f37c\") " pod="kube-system/kube-proxy-srfj8"
Dec 13 01:48:05.999237 kubelet[2458]: E1213 01:48:05.999180 2458 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
Dec 13 01:48:05.999237 kubelet[2458]: E1213 01:48:05.999213 2458 projected.go:194] Error preparing data for projected volume kube-api-access-j6d2d for pod kube-system/kube-proxy-srfj8: configmap "kube-root-ca.crt" not found
Dec 13 01:48:05.999382 kubelet[2458]: E1213 01:48:05.999301 2458 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a2ba2033-d840-4373-8605-08f8f4b0f37c-kube-api-access-j6d2d podName:a2ba2033-d840-4373-8605-08f8f4b0f37c nodeName:}" failed. No retries permitted until 2024-12-13 01:48:06.499283501 +0000 UTC m=+6.815929556 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-j6d2d" (UniqueName: "kubernetes.io/projected/a2ba2033-d840-4373-8605-08f8f4b0f37c-kube-api-access-j6d2d") pod "kube-proxy-srfj8" (UID: "a2ba2033-d840-4373-8605-08f8f4b0f37c") : configmap "kube-root-ca.crt" not found
Dec 13 01:48:06.693168 systemd[1]: Created slice kubepods-besteffort-pod2ac28702_f19b_47e9_bee3_a50493ed40a0.slice - libcontainer container kubepods-besteffort-pod2ac28702_f19b_47e9_bee3_a50493ed40a0.slice.
Dec 13 01:48:06.700607 kubelet[2458]: I1213 01:48:06.698319 2458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/2ac28702-f19b-47e9-bee3-a50493ed40a0-var-lib-calico\") pod \"tigera-operator-76c4976dd7-t2x8d\" (UID: \"2ac28702-f19b-47e9-bee3-a50493ed40a0\") " pod="tigera-operator/tigera-operator-76c4976dd7-t2x8d"
Dec 13 01:48:06.700607 kubelet[2458]: I1213 01:48:06.698365 2458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mvxvm\" (UniqueName: \"kubernetes.io/projected/2ac28702-f19b-47e9-bee3-a50493ed40a0-kube-api-access-mvxvm\") pod \"tigera-operator-76c4976dd7-t2x8d\" (UID: \"2ac28702-f19b-47e9-bee3-a50493ed40a0\") " pod="tigera-operator/tigera-operator-76c4976dd7-t2x8d"
Dec 13 01:48:06.794748 kubelet[2458]: E1213 01:48:06.794716 2458 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:48:06.795284 containerd[1449]: time="2024-12-13T01:48:06.795201168Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-srfj8,Uid:a2ba2033-d840-4373-8605-08f8f4b0f37c,Namespace:kube-system,Attempt:0,}"
Dec 13 01:48:06.815014 containerd[1449]: time="2024-12-13T01:48:06.814533104Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 01:48:06.815014 containerd[1449]: time="2024-12-13T01:48:06.814615098Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 01:48:06.815014 containerd[1449]: time="2024-12-13T01:48:06.814631424Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:48:06.815014 containerd[1449]: time="2024-12-13T01:48:06.814708695Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:48:06.838772 systemd[1]: Started cri-containerd-3f432c9724d3ebbbca607f5810e6f7aacd6dcb674596097a157fb8225945218c.scope - libcontainer container 3f432c9724d3ebbbca607f5810e6f7aacd6dcb674596097a157fb8225945218c.
Dec 13 01:48:06.856934 containerd[1449]: time="2024-12-13T01:48:06.856894512Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-srfj8,Uid:a2ba2033-d840-4373-8605-08f8f4b0f37c,Namespace:kube-system,Attempt:0,} returns sandbox id \"3f432c9724d3ebbbca607f5810e6f7aacd6dcb674596097a157fb8225945218c\""
Dec 13 01:48:06.857670 kubelet[2458]: E1213 01:48:06.857648 2458 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:48:06.861333 containerd[1449]: time="2024-12-13T01:48:06.861237828Z" level=info msg="CreateContainer within sandbox \"3f432c9724d3ebbbca607f5810e6f7aacd6dcb674596097a157fb8225945218c\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Dec 13 01:48:06.873329 containerd[1449]: time="2024-12-13T01:48:06.873289620Z" level=info msg="CreateContainer within sandbox \"3f432c9724d3ebbbca607f5810e6f7aacd6dcb674596097a157fb8225945218c\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"03b844e88a4179fbe200e9081831f9463acbb069c795df80d5fe3bfae83d1bf2\""
Dec 13 01:48:06.874872 containerd[1449]: time="2024-12-13T01:48:06.873988623Z" level=info msg="StartContainer for \"03b844e88a4179fbe200e9081831f9463acbb069c795df80d5fe3bfae83d1bf2\""
Dec 13 01:48:06.897808 systemd[1]: Started cri-containerd-03b844e88a4179fbe200e9081831f9463acbb069c795df80d5fe3bfae83d1bf2.scope - libcontainer container 03b844e88a4179fbe200e9081831f9463acbb069c795df80d5fe3bfae83d1bf2.
Dec 13 01:48:06.919105 containerd[1449]: time="2024-12-13T01:48:06.919066969Z" level=info msg="StartContainer for \"03b844e88a4179fbe200e9081831f9463acbb069c795df80d5fe3bfae83d1bf2\" returns successfully"
Dec 13 01:48:06.997285 containerd[1449]: time="2024-12-13T01:48:06.997193916Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4976dd7-t2x8d,Uid:2ac28702-f19b-47e9-bee3-a50493ed40a0,Namespace:tigera-operator,Attempt:0,}"
Dec 13 01:48:07.028974 containerd[1449]: time="2024-12-13T01:48:07.028521260Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 01:48:07.028974 containerd[1449]: time="2024-12-13T01:48:07.028936899Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 01:48:07.028974 containerd[1449]: time="2024-12-13T01:48:07.028966150Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:48:07.029213 containerd[1449]: time="2024-12-13T01:48:07.029058506Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:48:07.048792 systemd[1]: Started cri-containerd-919eaad56a0176373fa34ca1c2bf25022a5ae7240471b4ad099839d9628742e8.scope - libcontainer container 919eaad56a0176373fa34ca1c2bf25022a5ae7240471b4ad099839d9628742e8.
Dec 13 01:48:07.074545 containerd[1449]: time="2024-12-13T01:48:07.074500729Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4976dd7-t2x8d,Uid:2ac28702-f19b-47e9-bee3-a50493ed40a0,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"919eaad56a0176373fa34ca1c2bf25022a5ae7240471b4ad099839d9628742e8\""
Dec 13 01:48:07.086239 containerd[1449]: time="2024-12-13T01:48:07.086217422Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\""
Dec 13 01:48:07.298142 kubelet[2458]: E1213 01:48:07.296930 2458 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:48:07.604649 systemd[1]: run-containerd-runc-k8s.io-3f432c9724d3ebbbca607f5810e6f7aacd6dcb674596097a157fb8225945218c-runc.I0DSXD.mount: Deactivated successfully.
Dec 13 01:48:07.796560 kubelet[2458]: E1213 01:48:07.796512 2458 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:48:07.797728 kubelet[2458]: E1213 01:48:07.796525 2458 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:48:07.812803 kubelet[2458]: I1213 01:48:07.812748 2458 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-srfj8" podStartSLOduration=2.8127319870000003 podStartE2EDuration="2.812731987s" podCreationTimestamp="2024-12-13 01:48:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:48:07.812189059 +0000 UTC m=+8.128835114" watchObservedRunningTime="2024-12-13 01:48:07.812731987 +0000 UTC m=+8.129378042"
Dec 13 01:48:08.943954 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2443208948.mount: Deactivated successfully.
Dec 13 01:48:09.255003 containerd[1449]: time="2024-12-13T01:48:09.254889746Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:48:09.256409 containerd[1449]: time="2024-12-13T01:48:09.256238852Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=19125384"
Dec 13 01:48:09.257085 containerd[1449]: time="2024-12-13T01:48:09.257033766Z" level=info msg="ImageCreate event name:\"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:48:09.259202 containerd[1449]: time="2024-12-13T01:48:09.259153779Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:48:09.260207 containerd[1449]: time="2024-12-13T01:48:09.260051609Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"19120155\" in 2.173779527s"
Dec 13 01:48:09.260207 containerd[1449]: time="2024-12-13T01:48:09.260085741Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\""
Dec 13 01:48:09.264249 containerd[1449]: time="2024-12-13T01:48:09.264223931Z" level=info msg="CreateContainer within sandbox \"919eaad56a0176373fa34ca1c2bf25022a5ae7240471b4ad099839d9628742e8\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Dec 13 01:48:09.273457 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount721049880.mount: Deactivated successfully.
Dec 13 01:48:09.275248 containerd[1449]: time="2024-12-13T01:48:09.275197802Z" level=info msg="CreateContainer within sandbox \"919eaad56a0176373fa34ca1c2bf25022a5ae7240471b4ad099839d9628742e8\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"5db6c0c9b41c98cfad3c6211f30525bd6408de4bd0168b512d9b7b68f2541eca\""
Dec 13 01:48:09.275665 containerd[1449]: time="2024-12-13T01:48:09.275632232Z" level=info msg="StartContainer for \"5db6c0c9b41c98cfad3c6211f30525bd6408de4bd0168b512d9b7b68f2541eca\""
Dec 13 01:48:09.300817 systemd[1]: Started cri-containerd-5db6c0c9b41c98cfad3c6211f30525bd6408de4bd0168b512d9b7b68f2541eca.scope - libcontainer container 5db6c0c9b41c98cfad3c6211f30525bd6408de4bd0168b512d9b7b68f2541eca.
Dec 13 01:48:09.331115 containerd[1449]: time="2024-12-13T01:48:09.329044165Z" level=info msg="StartContainer for \"5db6c0c9b41c98cfad3c6211f30525bd6408de4bd0168b512d9b7b68f2541eca\" returns successfully"
Dec 13 01:48:11.105696 update_engine[1429]: I20241213 01:48:11.105626 1429 update_attempter.cc:509] Updating boot flags...
Dec 13 01:48:11.126702 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 46 scanned by (udev-worker) (2851)
Dec 13 01:48:11.179614 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 46 scanned by (udev-worker) (2854)
Dec 13 01:48:11.211609 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 46 scanned by (udev-worker) (2854)
Dec 13 01:48:11.721172 kubelet[2458]: E1213 01:48:11.721138 2458 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:48:11.735080 kubelet[2458]: I1213 01:48:11.734877 2458 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-76c4976dd7-t2x8d" podStartSLOduration=3.556795158 podStartE2EDuration="5.73486259s" podCreationTimestamp="2024-12-13 01:48:06 +0000 UTC" firstStartedPulling="2024-12-13 01:48:07.084562667 +0000 UTC m=+7.401208723" lastFinishedPulling="2024-12-13 01:48:09.2626301 +0000 UTC m=+9.579276155" observedRunningTime="2024-12-13 01:48:09.815575376 +0000 UTC m=+10.132221472" watchObservedRunningTime="2024-12-13 01:48:11.73486259 +0000 UTC m=+12.051508645"
Dec 13 01:48:11.805965 kubelet[2458]: E1213 01:48:11.805905 2458 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:48:11.811154 kubelet[2458]: E1213 01:48:11.811115 2458 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:48:13.373855 systemd[1]: Created slice kubepods-besteffort-pod0906965c_7a75_4f28_ae00_ac58faeb87ca.slice - libcontainer container kubepods-besteffort-pod0906965c_7a75_4f28_ae00_ac58faeb87ca.slice.
Dec 13 01:48:13.379788 systemd[1]: Created slice kubepods-besteffort-podf5c4f9a7_8026_4197_ae6a_92514ea9c87f.slice - libcontainer container kubepods-besteffort-podf5c4f9a7_8026_4197_ae6a_92514ea9c87f.slice.
Dec 13 01:48:13.405301 kubelet[2458]: E1213 01:48:13.405197 2458 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tv6tv" podUID="1079435b-e60b-443f-932d-e02d21e8e429"
Dec 13 01:48:13.442922 kubelet[2458]: I1213 01:48:13.442869 2458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/0906965c-7a75-4f28-ae00-ac58faeb87ca-var-lib-calico\") pod \"calico-node-xzlkt\" (UID: \"0906965c-7a75-4f28-ae00-ac58faeb87ca\") " pod="calico-system/calico-node-xzlkt"
Dec 13 01:48:13.442922 kubelet[2458]: I1213 01:48:13.442918 2458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/0906965c-7a75-4f28-ae00-ac58faeb87ca-policysync\") pod \"calico-node-xzlkt\" (UID: \"0906965c-7a75-4f28-ae00-ac58faeb87ca\") " pod="calico-system/calico-node-xzlkt"
Dec 13 01:48:13.443095 kubelet[2458]: I1213 01:48:13.442935 2458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/0906965c-7a75-4f28-ae00-ac58faeb87ca-var-run-calico\") pod \"calico-node-xzlkt\" (UID: \"0906965c-7a75-4f28-ae00-ac58faeb87ca\") " pod="calico-system/calico-node-xzlkt"
Dec 13 01:48:13.443095 kubelet[2458]: I1213 01:48:13.442951 2458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1079435b-e60b-443f-932d-e02d21e8e429-kubelet-dir\") pod \"csi-node-driver-tv6tv\" (UID: \"1079435b-e60b-443f-932d-e02d21e8e429\") " pod="calico-system/csi-node-driver-tv6tv"
Dec 13 01:48:13.443095 kubelet[2458]: I1213 01:48:13.442969 2458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0906965c-7a75-4f28-ae00-ac58faeb87ca-xtables-lock\") pod \"calico-node-xzlkt\" (UID: \"0906965c-7a75-4f28-ae00-ac58faeb87ca\") " pod="calico-system/calico-node-xzlkt"
Dec 13 01:48:13.443095 kubelet[2458]: I1213 01:48:13.442984 2458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/0906965c-7a75-4f28-ae00-ac58faeb87ca-cni-log-dir\") pod \"calico-node-xzlkt\" (UID: \"0906965c-7a75-4f28-ae00-ac58faeb87ca\") " pod="calico-system/calico-node-xzlkt"
Dec 13 01:48:13.443095 kubelet[2458]: I1213 01:48:13.443001 2458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/1079435b-e60b-443f-932d-e02d21e8e429-socket-dir\") pod \"csi-node-driver-tv6tv\" (UID: \"1079435b-e60b-443f-932d-e02d21e8e429\") " pod="calico-system/csi-node-driver-tv6tv"
Dec 13 01:48:13.443205 kubelet[2458]: I1213 01:48:13.443020 2458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/0906965c-7a75-4f28-ae00-ac58faeb87ca-node-certs\") pod \"calico-node-xzlkt\" (UID: \"0906965c-7a75-4f28-ae00-ac58faeb87ca\") " pod="calico-system/calico-node-xzlkt"
Dec 13 01:48:13.443205 kubelet[2458]: I1213 01:48:13.443035 2458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/0906965c-7a75-4f28-ae00-ac58faeb87ca-cni-net-dir\") pod \"calico-node-xzlkt\" (UID: \"0906965c-7a75-4f28-ae00-ac58faeb87ca\") " pod="calico-system/calico-node-xzlkt"
Dec 13 01:48:13.443205 kubelet[2458]: I1213 01:48:13.443050 2458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0906965c-7a75-4f28-ae00-ac58faeb87ca-tigera-ca-bundle\") pod \"calico-node-xzlkt\" (UID: \"0906965c-7a75-4f28-ae00-ac58faeb87ca\") " pod="calico-system/calico-node-xzlkt"
Dec 13 01:48:13.443205 kubelet[2458]: I1213 01:48:13.443064 2458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/0906965c-7a75-4f28-ae00-ac58faeb87ca-cni-bin-dir\") pod \"calico-node-xzlkt\" (UID: \"0906965c-7a75-4f28-ae00-ac58faeb87ca\") " pod="calico-system/calico-node-xzlkt"
Dec 13 01:48:13.443205 kubelet[2458]: I1213 01:48:13.443079 2458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/1079435b-e60b-443f-932d-e02d21e8e429-varrun\") pod \"csi-node-driver-tv6tv\" (UID: \"1079435b-e60b-443f-932d-e02d21e8e429\") " pod="calico-system/csi-node-driver-tv6tv"
Dec 13 01:48:13.443331 kubelet[2458]: I1213 01:48:13.443095 2458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/f5c4f9a7-8026-4197-ae6a-92514ea9c87f-typha-certs\") pod \"calico-typha-69fdc96f68-pl2tv\" (UID: \"f5c4f9a7-8026-4197-ae6a-92514ea9c87f\") " pod="calico-system/calico-typha-69fdc96f68-pl2tv"
Dec 13 01:48:13.443331 kubelet[2458]: I1213 01:48:13.443112 2458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0906965c-7a75-4f28-ae00-ac58faeb87ca-lib-modules\") pod \"calico-node-xzlkt\" (UID: \"0906965c-7a75-4f28-ae00-ac58faeb87ca\") " pod="calico-system/calico-node-xzlkt"
Dec 13 01:48:13.443331 kubelet[2458]: I1213 01:48:13.443126 2458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/1079435b-e60b-443f-932d-e02d21e8e429-registration-dir\") pod \"csi-node-driver-tv6tv\" (UID: \"1079435b-e60b-443f-932d-e02d21e8e429\") " pod="calico-system/csi-node-driver-tv6tv"
Dec 13 01:48:13.443331 kubelet[2458]: I1213 01:48:13.443145 2458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f5c4f9a7-8026-4197-ae6a-92514ea9c87f-tigera-ca-bundle\") pod \"calico-typha-69fdc96f68-pl2tv\" (UID: \"f5c4f9a7-8026-4197-ae6a-92514ea9c87f\") " pod="calico-system/calico-typha-69fdc96f68-pl2tv"
Dec 13 01:48:13.443331 kubelet[2458]: I1213 01:48:13.443161 2458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vq4mx\" (UniqueName: \"kubernetes.io/projected/f5c4f9a7-8026-4197-ae6a-92514ea9c87f-kube-api-access-vq4mx\") pod \"calico-typha-69fdc96f68-pl2tv\" (UID: \"f5c4f9a7-8026-4197-ae6a-92514ea9c87f\") " pod="calico-system/calico-typha-69fdc96f68-pl2tv"
Dec 13 01:48:13.443444 kubelet[2458]: I1213 01:48:13.443177 2458 reconciler_common.go:245]
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9b7k9\" (UniqueName: \"kubernetes.io/projected/0906965c-7a75-4f28-ae00-ac58faeb87ca-kube-api-access-9b7k9\") pod \"calico-node-xzlkt\" (UID: \"0906965c-7a75-4f28-ae00-ac58faeb87ca\") " pod="calico-system/calico-node-xzlkt" Dec 13 01:48:13.443444 kubelet[2458]: I1213 01:48:13.443192 2458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/0906965c-7a75-4f28-ae00-ac58faeb87ca-flexvol-driver-host\") pod \"calico-node-xzlkt\" (UID: \"0906965c-7a75-4f28-ae00-ac58faeb87ca\") " pod="calico-system/calico-node-xzlkt" Dec 13 01:48:13.443444 kubelet[2458]: I1213 01:48:13.443207 2458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9k6gd\" (UniqueName: \"kubernetes.io/projected/1079435b-e60b-443f-932d-e02d21e8e429-kube-api-access-9k6gd\") pod \"csi-node-driver-tv6tv\" (UID: \"1079435b-e60b-443f-932d-e02d21e8e429\") " pod="calico-system/csi-node-driver-tv6tv" Dec 13 01:48:13.552630 kubelet[2458]: E1213 01:48:13.550782 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:48:13.552630 kubelet[2458]: W1213 01:48:13.550811 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:48:13.554886 kubelet[2458]: E1213 01:48:13.554867 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:48:13.555021 kubelet[2458]: W1213 01:48:13.555007 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:48:13.555236 kubelet[2458]: E1213 01:48:13.555213 2458 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:48:13.556179 kubelet[2458]: E1213 01:48:13.555815 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:48:13.556179 kubelet[2458]: W1213 01:48:13.555901 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:48:13.556320 kubelet[2458]: E1213 01:48:13.556191 2458 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:48:13.556561 kubelet[2458]: E1213 01:48:13.556487 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:48:13.556561 kubelet[2458]: W1213 01:48:13.556513 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:48:13.556561 kubelet[2458]: E1213 01:48:13.556523 2458 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:48:13.556697 kubelet[2458]: E1213 01:48:13.556635 2458 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:48:13.560096 kubelet[2458]: E1213 01:48:13.560071 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:48:13.560096 kubelet[2458]: W1213 01:48:13.560089 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:48:13.560185 kubelet[2458]: E1213 01:48:13.560107 2458 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:48:13.560339 kubelet[2458]: E1213 01:48:13.560314 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:48:13.560339 kubelet[2458]: W1213 01:48:13.560326 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:48:13.560455 kubelet[2458]: E1213 01:48:13.560371 2458 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:48:13.560860 kubelet[2458]: E1213 01:48:13.560838 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:48:13.560860 kubelet[2458]: W1213 01:48:13.560855 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:48:13.560944 kubelet[2458]: E1213 01:48:13.560873 2458 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:48:13.561813 kubelet[2458]: E1213 01:48:13.561792 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:48:13.561879 kubelet[2458]: W1213 01:48:13.561814 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:48:13.561879 kubelet[2458]: E1213 01:48:13.561830 2458 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:48:13.571624 kubelet[2458]: E1213 01:48:13.571218 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:48:13.571624 kubelet[2458]: W1213 01:48:13.571239 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:48:13.571624 kubelet[2458]: E1213 01:48:13.571254 2458 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:48:13.575396 kubelet[2458]: E1213 01:48:13.575377 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:48:13.575505 kubelet[2458]: W1213 01:48:13.575490 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:48:13.575572 kubelet[2458]: E1213 01:48:13.575561 2458 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:48:13.678008 kubelet[2458]: E1213 01:48:13.677906 2458 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:48:13.681102 containerd[1449]: time="2024-12-13T01:48:13.681069147Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-xzlkt,Uid:0906965c-7a75-4f28-ae00-ac58faeb87ca,Namespace:calico-system,Attempt:0,}" Dec 13 01:48:13.683440 kubelet[2458]: E1213 01:48:13.683353 2458 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:48:13.683912 containerd[1449]: time="2024-12-13T01:48:13.683751505Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-69fdc96f68-pl2tv,Uid:f5c4f9a7-8026-4197-ae6a-92514ea9c87f,Namespace:calico-system,Attempt:0,}" Dec 13 01:48:13.724480 containerd[1449]: time="2024-12-13T01:48:13.724401923Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:48:13.724644 containerd[1449]: time="2024-12-13T01:48:13.724464100Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:48:13.724644 containerd[1449]: time="2024-12-13T01:48:13.724555566Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:48:13.724725 containerd[1449]: time="2024-12-13T01:48:13.724680281Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:48:13.731775 containerd[1449]: time="2024-12-13T01:48:13.731669818Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:48:13.731775 containerd[1449]: time="2024-12-13T01:48:13.731730915Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:48:13.731775 containerd[1449]: time="2024-12-13T01:48:13.731742639Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:48:13.732021 containerd[1449]: time="2024-12-13T01:48:13.731822581Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:48:13.745776 systemd[1]: Started cri-containerd-360f2629fb74bd7f9a77fa548613eed7eadda46fcf23ec2cbe5b3a1ae55384e3.scope - libcontainer container 360f2629fb74bd7f9a77fa548613eed7eadda46fcf23ec2cbe5b3a1ae55384e3. Dec 13 01:48:13.753026 systemd[1]: Started cri-containerd-50b5f431117afae110a4a29e58f76373dad849e2f6943b66568d54b30592220d.scope - libcontainer container 50b5f431117afae110a4a29e58f76373dad849e2f6943b66568d54b30592220d. Dec 13 01:48:13.809874 containerd[1449]: time="2024-12-13T01:48:13.809626627Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-xzlkt,Uid:0906965c-7a75-4f28-ae00-ac58faeb87ca,Namespace:calico-system,Attempt:0,} returns sandbox id \"50b5f431117afae110a4a29e58f76373dad849e2f6943b66568d54b30592220d\"" Dec 13 01:48:13.813705 kubelet[2458]: E1213 01:48:13.813665 2458 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:48:13.816314 containerd[1449]: time="2024-12-13T01:48:13.816283350Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Dec 13 01:48:13.823772 containerd[1449]: time="2024-12-13T01:48:13.823644272Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-69fdc96f68-pl2tv,Uid:f5c4f9a7-8026-4197-ae6a-92514ea9c87f,Namespace:calico-system,Attempt:0,} returns sandbox id \"360f2629fb74bd7f9a77fa548613eed7eadda46fcf23ec2cbe5b3a1ae55384e3\"" Dec 13 01:48:13.825490 kubelet[2458]: E1213 01:48:13.825461 2458 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:48:14.763596 kubelet[2458]: E1213 01:48:14.763532 2458 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tv6tv" podUID="1079435b-e60b-443f-932d-e02d21e8e429" Dec 13 01:48:14.823898 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1956998747.mount: Deactivated successfully. 
Dec 13 01:48:14.896805 containerd[1449]: time="2024-12-13T01:48:14.896307787Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:48:14.896805 containerd[1449]: time="2024-12-13T01:48:14.896729460Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=6487603" Dec 13 01:48:14.897516 containerd[1449]: time="2024-12-13T01:48:14.897460577Z" level=info msg="ImageCreate event name:\"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:48:14.899473 containerd[1449]: time="2024-12-13T01:48:14.899384096Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:48:14.900297 containerd[1449]: time="2024-12-13T01:48:14.900259052Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6487425\" in 1.08393441s" Dec 13 01:48:14.900339 containerd[1449]: time="2024-12-13T01:48:14.900297622Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\"" Dec 13 01:48:14.902629 containerd[1449]: time="2024-12-13T01:48:14.902595161Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\"" Dec 13 01:48:14.904764 containerd[1449]: time="2024-12-13T01:48:14.904729177Z" level=info msg="CreateContainer within sandbox \"50b5f431117afae110a4a29e58f76373dad849e2f6943b66568d54b30592220d\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Dec 13 01:48:14.942077 containerd[1449]: time="2024-12-13T01:48:14.942028630Z" level=info msg="CreateContainer within sandbox \"50b5f431117afae110a4a29e58f76373dad849e2f6943b66568d54b30592220d\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"f56384d6ff5f8f02b1fa45a265f38f8aadb2ddcf97ed24ce8a278ca759f65fe5\"" Dec 13 01:48:14.942793 containerd[1449]: time="2024-12-13T01:48:14.942760227Z" level=info msg="StartContainer for \"f56384d6ff5f8f02b1fa45a265f38f8aadb2ddcf97ed24ce8a278ca759f65fe5\"" Dec 13 01:48:14.971831 systemd[1]: Started cri-containerd-f56384d6ff5f8f02b1fa45a265f38f8aadb2ddcf97ed24ce8a278ca759f65fe5.scope - libcontainer container f56384d6ff5f8f02b1fa45a265f38f8aadb2ddcf97ed24ce8a278ca759f65fe5. Dec 13 01:48:15.003697 containerd[1449]: time="2024-12-13T01:48:15.003066562Z" level=info msg="StartContainer for \"f56384d6ff5f8f02b1fa45a265f38f8aadb2ddcf97ed24ce8a278ca759f65fe5\" returns successfully" Dec 13 01:48:15.061515 systemd[1]: cri-containerd-f56384d6ff5f8f02b1fa45a265f38f8aadb2ddcf97ed24ce8a278ca759f65fe5.scope: Deactivated successfully. 
Dec 13 01:48:15.112928 containerd[1449]: time="2024-12-13T01:48:15.109387699Z" level=info msg="shim disconnected" id=f56384d6ff5f8f02b1fa45a265f38f8aadb2ddcf97ed24ce8a278ca759f65fe5 namespace=k8s.io Dec 13 01:48:15.112928 containerd[1449]: time="2024-12-13T01:48:15.112921288Z" level=warning msg="cleaning up after shim disconnected" id=f56384d6ff5f8f02b1fa45a265f38f8aadb2ddcf97ed24ce8a278ca759f65fe5 namespace=k8s.io Dec 13 01:48:15.112928 containerd[1449]: time="2024-12-13T01:48:15.112933491Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:48:15.832513 kubelet[2458]: E1213 01:48:15.832481 2458 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:48:16.277662 containerd[1449]: time="2024-12-13T01:48:16.277615141Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:48:16.278656 containerd[1449]: time="2024-12-13T01:48:16.278624108Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=27861516" Dec 13 01:48:16.279590 containerd[1449]: time="2024-12-13T01:48:16.279535572Z" level=info msg="ImageCreate event name:\"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:48:16.281653 containerd[1449]: time="2024-12-13T01:48:16.281623724Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:48:16.282757 containerd[1449]: time="2024-12-13T01:48:16.282728755Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"29231162\" in 1.380095304s" Dec 13 01:48:16.282928 containerd[1449]: time="2024-12-13T01:48:16.282837622Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\"" Dec 13 01:48:16.283829 containerd[1449]: time="2024-12-13T01:48:16.283807140Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Dec 13 01:48:16.293640 containerd[1449]: time="2024-12-13T01:48:16.293349682Z" level=info msg="CreateContainer within sandbox \"360f2629fb74bd7f9a77fa548613eed7eadda46fcf23ec2cbe5b3a1ae55384e3\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Dec 13 01:48:16.314593 containerd[1449]: time="2024-12-13T01:48:16.314520757Z" level=info msg="CreateContainer within sandbox \"360f2629fb74bd7f9a77fa548613eed7eadda46fcf23ec2cbe5b3a1ae55384e3\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"6c25c0fd39ac6f22c9b50d5d309c137470ac150cb1349f4c505e14c548457152\"" Dec 13 01:48:16.315698 containerd[1449]: time="2024-12-13T01:48:16.315672879Z" level=info msg="StartContainer for \"6c25c0fd39ac6f22c9b50d5d309c137470ac150cb1349f4c505e14c548457152\"" Dec 13 01:48:16.340736 systemd[1]: Started cri-containerd-6c25c0fd39ac6f22c9b50d5d309c137470ac150cb1349f4c505e14c548457152.scope - libcontainer container 
6c25c0fd39ac6f22c9b50d5d309c137470ac150cb1349f4c505e14c548457152. Dec 13 01:48:16.379791 containerd[1449]: time="2024-12-13T01:48:16.379744041Z" level=info msg="StartContainer for \"6c25c0fd39ac6f22c9b50d5d309c137470ac150cb1349f4c505e14c548457152\" returns successfully" Dec 13 01:48:16.763715 kubelet[2458]: E1213 01:48:16.763674 2458 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tv6tv" podUID="1079435b-e60b-443f-932d-e02d21e8e429" Dec 13 01:48:16.838602 kubelet[2458]: E1213 01:48:16.838213 2458 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:48:17.837670 kubelet[2458]: I1213 01:48:17.837620 2458 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 01:48:17.837944 kubelet[2458]: E1213 01:48:17.837927 2458 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:48:18.763850 kubelet[2458]: E1213 01:48:18.763453 2458 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tv6tv" podUID="1079435b-e60b-443f-932d-e02d21e8e429" Dec 13 01:48:20.617017 containerd[1449]: time="2024-12-13T01:48:20.616966090Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:48:20.617547 containerd[1449]: time="2024-12-13T01:48:20.617513883Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=89703123" Dec 13 01:48:20.618296 containerd[1449]: time="2024-12-13T01:48:20.618260836Z" level=info msg="ImageCreate event name:\"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:48:20.620611 containerd[1449]: time="2024-12-13T01:48:20.620567550Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:48:20.621322 containerd[1449]: time="2024-12-13T01:48:20.621159792Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"91072777\" in 4.337321605s" Dec 13 01:48:20.621322 containerd[1449]: time="2024-12-13T01:48:20.621186958Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\"" Dec 13 01:48:20.624551 containerd[1449]: time="2024-12-13T01:48:20.624523643Z" level=info msg="CreateContainer within sandbox \"50b5f431117afae110a4a29e58f76373dad849e2f6943b66568d54b30592220d\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Dec 13 01:48:20.634545 
containerd[1449]: time="2024-12-13T01:48:20.634498173Z" level=info msg="CreateContainer within sandbox \"50b5f431117afae110a4a29e58f76373dad849e2f6943b66568d54b30592220d\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"830f3375d318f616956d465946bd55537ea2b06ac8e4a1972e7832b824b6a0f3\"" Dec 13 01:48:20.635605 containerd[1449]: time="2024-12-13T01:48:20.635563552Z" level=info msg="StartContainer for \"830f3375d318f616956d465946bd55537ea2b06ac8e4a1972e7832b824b6a0f3\"" Dec 13 01:48:20.671738 systemd[1]: Started cri-containerd-830f3375d318f616956d465946bd55537ea2b06ac8e4a1972e7832b824b6a0f3.scope - libcontainer container 830f3375d318f616956d465946bd55537ea2b06ac8e4a1972e7832b824b6a0f3. Dec 13 01:48:20.700026 containerd[1449]: time="2024-12-13T01:48:20.699988391Z" level=info msg="StartContainer for \"830f3375d318f616956d465946bd55537ea2b06ac8e4a1972e7832b824b6a0f3\" returns successfully" Dec 13 01:48:20.763903 kubelet[2458]: E1213 01:48:20.763852 2458 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tv6tv" podUID="1079435b-e60b-443f-932d-e02d21e8e429" Dec 13 01:48:20.870680 kubelet[2458]: E1213 01:48:20.869113 2458 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:48:20.885094 kubelet[2458]: I1213 01:48:20.884828 2458 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-69fdc96f68-pl2tv" podStartSLOduration=5.427621017 podStartE2EDuration="7.884797129s" podCreationTimestamp="2024-12-13 01:48:13 +0000 UTC" firstStartedPulling="2024-12-13 01:48:13.82639625 +0000 UTC m=+14.143042265" lastFinishedPulling="2024-12-13 01:48:16.283572322 +0000 UTC m=+16.600218377" observedRunningTime="2024-12-13 01:48:16.857910136 +0000 UTC m=+17.174556231" watchObservedRunningTime="2024-12-13 01:48:20.884797129 +0000 UTC m=+21.201443144" Dec 13 01:48:21.173362 systemd[1]: cri-containerd-830f3375d318f616956d465946bd55537ea2b06ac8e4a1972e7832b824b6a0f3.scope: Deactivated successfully. Dec 13 01:48:21.192306 kubelet[2458]: I1213 01:48:21.190862 2458 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Dec 13 01:48:21.241391 systemd[1]: Created slice kubepods-besteffort-pod15fcbc53_768f_4fca_83b6_c146a5c50cc1.slice - libcontainer container kubepods-besteffort-pod15fcbc53_768f_4fca_83b6_c146a5c50cc1.slice. Dec 13 01:48:21.246228 systemd[1]: Created slice kubepods-besteffort-podc2e2bdd9_b460_49cb_90c4_8e7578a0674d.slice - libcontainer container kubepods-besteffort-podc2e2bdd9_b460_49cb_90c4_8e7578a0674d.slice. Dec 13 01:48:21.251131 systemd[1]: Created slice kubepods-besteffort-pod1e6eb865_39dc_4ee5_9cdc_f699e1d6b8f5.slice - libcontainer container kubepods-besteffort-pod1e6eb865_39dc_4ee5_9cdc_f699e1d6b8f5.slice. Dec 13 01:48:21.256096 systemd[1]: Created slice kubepods-burstable-pod5ccb309b_ed47_4e0c_ae45_cf7bfdd924c2.slice - libcontainer container kubepods-burstable-pod5ccb309b_ed47_4e0c_ae45_cf7bfdd924c2.slice. Dec 13 01:48:21.261787 systemd[1]: Created slice kubepods-burstable-pod1e0c6640_314d_45ec_b91c_da2f72cfd50a.slice - libcontainer container kubepods-burstable-pod1e0c6640_314d_45ec_b91c_da2f72cfd50a.slice. 
Dec 13 01:48:21.330416 containerd[1449]: time="2024-12-13T01:48:21.330189840Z" level=info msg="shim disconnected" id=830f3375d318f616956d465946bd55537ea2b06ac8e4a1972e7832b824b6a0f3 namespace=k8s.io Dec 13 01:48:21.330416 containerd[1449]: time="2024-12-13T01:48:21.330246131Z" level=warning msg="cleaning up after shim disconnected" id=830f3375d318f616956d465946bd55537ea2b06ac8e4a1972e7832b824b6a0f3 namespace=k8s.io Dec 13 01:48:21.330416 containerd[1449]: time="2024-12-13T01:48:21.330254533Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:48:21.416636 kubelet[2458]: I1213 01:48:21.416524 2458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mmdm8\" (UniqueName: \"kubernetes.io/projected/15fcbc53-768f-4fca-83b6-c146a5c50cc1-kube-api-access-mmdm8\") pod \"calico-kube-controllers-5d4845d985-cnl5g\" (UID: \"15fcbc53-768f-4fca-83b6-c146a5c50cc1\") " pod="calico-system/calico-kube-controllers-5d4845d985-cnl5g" Dec 13 01:48:21.416636 kubelet[2458]: I1213 01:48:21.416577 2458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9gmwp\" (UniqueName: \"kubernetes.io/projected/c2e2bdd9-b460-49cb-90c4-8e7578a0674d-kube-api-access-9gmwp\") pod \"calico-apiserver-568767dfbd-qv8xv\" (UID: \"c2e2bdd9-b460-49cb-90c4-8e7578a0674d\") " pod="calico-apiserver/calico-apiserver-568767dfbd-qv8xv" Dec 13 01:48:21.416636 kubelet[2458]: I1213 01:48:21.416617 2458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kmcg9\" (UniqueName: \"kubernetes.io/projected/1e0c6640-314d-45ec-b91c-da2f72cfd50a-kube-api-access-kmcg9\") pod \"coredns-6f6b679f8f-hwhh6\" (UID: \"1e0c6640-314d-45ec-b91c-da2f72cfd50a\") " pod="kube-system/coredns-6f6b679f8f-hwhh6" Dec 13 01:48:21.416636 kubelet[2458]: I1213 01:48:21.416641 2458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/15fcbc53-768f-4fca-83b6-c146a5c50cc1-tigera-ca-bundle\") pod \"calico-kube-controllers-5d4845d985-cnl5g\" (UID: \"15fcbc53-768f-4fca-83b6-c146a5c50cc1\") " pod="calico-system/calico-kube-controllers-5d4845d985-cnl5g" Dec 13 01:48:21.416988 kubelet[2458]: I1213 01:48:21.416660 2458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/c2e2bdd9-b460-49cb-90c4-8e7578a0674d-calico-apiserver-certs\") pod \"calico-apiserver-568767dfbd-qv8xv\" (UID: \"c2e2bdd9-b460-49cb-90c4-8e7578a0674d\") " pod="calico-apiserver/calico-apiserver-568767dfbd-qv8xv" Dec 13 01:48:21.416988 kubelet[2458]: I1213 01:48:21.416679 2458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j6mhn\" (UniqueName: \"kubernetes.io/projected/5ccb309b-ed47-4e0c-ae45-cf7bfdd924c2-kube-api-access-j6mhn\") pod \"coredns-6f6b679f8f-ccdvt\" (UID: \"5ccb309b-ed47-4e0c-ae45-cf7bfdd924c2\") " pod="kube-system/coredns-6f6b679f8f-ccdvt" Dec 13 01:48:21.416988 kubelet[2458]: I1213 01:48:21.416720 2458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1e0c6640-314d-45ec-b91c-da2f72cfd50a-config-volume\") pod \"coredns-6f6b679f8f-hwhh6\" (UID: \"1e0c6640-314d-45ec-b91c-da2f72cfd50a\") " pod="kube-system/coredns-6f6b679f8f-hwhh6" Dec 13 01:48:21.416988 
kubelet[2458]: I1213 01:48:21.416737 2458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/1e6eb865-39dc-4ee5-9cdc-f699e1d6b8f5-calico-apiserver-certs\") pod \"calico-apiserver-568767dfbd-c99fj\" (UID: \"1e6eb865-39dc-4ee5-9cdc-f699e1d6b8f5\") " pod="calico-apiserver/calico-apiserver-568767dfbd-c99fj" Dec 13 01:48:21.416988 kubelet[2458]: I1213 01:48:21.416755 2458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wsm2h\" (UniqueName: \"kubernetes.io/projected/1e6eb865-39dc-4ee5-9cdc-f699e1d6b8f5-kube-api-access-wsm2h\") pod \"calico-apiserver-568767dfbd-c99fj\" (UID: \"1e6eb865-39dc-4ee5-9cdc-f699e1d6b8f5\") " pod="calico-apiserver/calico-apiserver-568767dfbd-c99fj" Dec 13 01:48:21.417112 kubelet[2458]: I1213 01:48:21.416791 2458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5ccb309b-ed47-4e0c-ae45-cf7bfdd924c2-config-volume\") pod \"coredns-6f6b679f8f-ccdvt\" (UID: \"5ccb309b-ed47-4e0c-ae45-cf7bfdd924c2\") " pod="kube-system/coredns-6f6b679f8f-ccdvt" Dec 13 01:48:21.545516 containerd[1449]: time="2024-12-13T01:48:21.545398283Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5d4845d985-cnl5g,Uid:15fcbc53-768f-4fca-83b6-c146a5c50cc1,Namespace:calico-system,Attempt:0,}" Dec 13 01:48:21.551620 containerd[1449]: time="2024-12-13T01:48:21.549941978Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-568767dfbd-qv8xv,Uid:c2e2bdd9-b460-49cb-90c4-8e7578a0674d,Namespace:calico-apiserver,Attempt:0,}" Dec 13 01:48:21.554449 containerd[1449]: time="2024-12-13T01:48:21.554405338Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-568767dfbd-c99fj,Uid:1e6eb865-39dc-4ee5-9cdc-f699e1d6b8f5,Namespace:calico-apiserver,Attempt:0,}" Dec 13 01:48:21.560892 kubelet[2458]: E1213 01:48:21.560863 2458 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:48:21.561986 containerd[1449]: time="2024-12-13T01:48:21.561566229Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-ccdvt,Uid:5ccb309b-ed47-4e0c-ae45-cf7bfdd924c2,Namespace:kube-system,Attempt:0,}" Dec 13 01:48:21.564487 kubelet[2458]: E1213 01:48:21.564447 2458 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:48:21.566339 containerd[1449]: time="2024-12-13T01:48:21.565976178Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-hwhh6,Uid:1e0c6640-314d-45ec-b91c-da2f72cfd50a,Namespace:kube-system,Attempt:0,}" Dec 13 01:48:21.658277 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-830f3375d318f616956d465946bd55537ea2b06ac8e4a1972e7832b824b6a0f3-rootfs.mount: Deactivated successfully. 
Dec 13 01:48:21.905772 kubelet[2458]: E1213 01:48:21.905208 2458 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:48:21.913416 containerd[1449]: time="2024-12-13T01:48:21.911455328Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Dec 13 01:48:22.045471 containerd[1449]: time="2024-12-13T01:48:22.045348474Z" level=error msg="Failed to destroy network for sandbox \"6e05dacb397eb09bdb17cafb5d5150dcb1c0fa877ba386d38c38595f8ec6d236\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:48:22.048788 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6e05dacb397eb09bdb17cafb5d5150dcb1c0fa877ba386d38c38595f8ec6d236-shm.mount: Deactivated successfully. Dec 13 01:48:22.051273 containerd[1449]: time="2024-12-13T01:48:22.051228586Z" level=error msg="encountered an error cleaning up failed sandbox \"6e05dacb397eb09bdb17cafb5d5150dcb1c0fa877ba386d38c38595f8ec6d236\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:48:22.051388 containerd[1449]: time="2024-12-13T01:48:22.051295879Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-hwhh6,Uid:1e0c6640-314d-45ec-b91c-da2f72cfd50a,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6e05dacb397eb09bdb17cafb5d5150dcb1c0fa877ba386d38c38595f8ec6d236\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:48:22.053483 kubelet[2458]: E1213 01:48:22.053122 2458 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6e05dacb397eb09bdb17cafb5d5150dcb1c0fa877ba386d38c38595f8ec6d236\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:48:22.053483 kubelet[2458]: E1213 01:48:22.053265 2458 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6e05dacb397eb09bdb17cafb5d5150dcb1c0fa877ba386d38c38595f8ec6d236\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-hwhh6" Dec 13 01:48:22.053483 kubelet[2458]: E1213 01:48:22.053351 2458 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6e05dacb397eb09bdb17cafb5d5150dcb1c0fa877ba386d38c38595f8ec6d236\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-hwhh6" Dec 13 01:48:22.053671 kubelet[2458]: E1213 01:48:22.053414 2458 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"coredns-6f6b679f8f-hwhh6_kube-system(1e0c6640-314d-45ec-b91c-da2f72cfd50a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-hwhh6_kube-system(1e0c6640-314d-45ec-b91c-da2f72cfd50a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6e05dacb397eb09bdb17cafb5d5150dcb1c0fa877ba386d38c38595f8ec6d236\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-hwhh6" podUID="1e0c6640-314d-45ec-b91c-da2f72cfd50a" Dec 13 01:48:22.053908 containerd[1449]: time="2024-12-13T01:48:22.053878967Z" level=error msg="Failed to destroy network for sandbox \"00abbd5d43ffcc0c24905f5b2367a2c892ac5cbc2d2cc5c32bba7f8ee2a5587c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:48:22.054571 containerd[1449]: time="2024-12-13T01:48:22.054532491Z" level=error msg="encountered an error cleaning up failed sandbox \"00abbd5d43ffcc0c24905f5b2367a2c892ac5cbc2d2cc5c32bba7f8ee2a5587c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:48:22.054824 containerd[1449]: time="2024-12-13T01:48:22.054796181Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-568767dfbd-c99fj,Uid:1e6eb865-39dc-4ee5-9cdc-f699e1d6b8f5,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"00abbd5d43ffcc0c24905f5b2367a2c892ac5cbc2d2cc5c32bba7f8ee2a5587c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:48:22.055532 kubelet[2458]: E1213 01:48:22.055474 2458 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"00abbd5d43ffcc0c24905f5b2367a2c892ac5cbc2d2cc5c32bba7f8ee2a5587c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:48:22.055532 kubelet[2458]: E1213 01:48:22.055527 2458 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"00abbd5d43ffcc0c24905f5b2367a2c892ac5cbc2d2cc5c32bba7f8ee2a5587c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-568767dfbd-c99fj" Dec 13 01:48:22.055690 kubelet[2458]: E1213 01:48:22.055543 2458 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"00abbd5d43ffcc0c24905f5b2367a2c892ac5cbc2d2cc5c32bba7f8ee2a5587c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-568767dfbd-c99fj" Dec 13 01:48:22.055690 kubelet[2458]: E1213 01:48:22.055636 2458 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-568767dfbd-c99fj_calico-apiserver(1e6eb865-39dc-4ee5-9cdc-f699e1d6b8f5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-568767dfbd-c99fj_calico-apiserver(1e6eb865-39dc-4ee5-9cdc-f699e1d6b8f5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"00abbd5d43ffcc0c24905f5b2367a2c892ac5cbc2d2cc5c32bba7f8ee2a5587c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-568767dfbd-c99fj" podUID="1e6eb865-39dc-4ee5-9cdc-f699e1d6b8f5" Dec 13 01:48:22.064028 containerd[1449]: time="2024-12-13T01:48:22.063972116Z" level=error msg="Failed to destroy network for sandbox \"f09f7173214e2fad38a268637ebf03211361e08c2c1a94f4ac8919f604795447\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:48:22.064100 containerd[1449]: time="2024-12-13T01:48:22.063972036Z" level=error msg="Failed to destroy network for sandbox \"e9e6f61c4c1ed3323381b693883ae8a6d5fbbc71f5b753b0da1bc35d5d9a0e84\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:48:22.064542 containerd[1449]: time="2024-12-13T01:48:22.064315181Z" level=error msg="encountered an error cleaning up failed sandbox \"e9e6f61c4c1ed3323381b693883ae8a6d5fbbc71f5b753b0da1bc35d5d9a0e84\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:48:22.064542 containerd[1449]: time="2024-12-13T01:48:22.064351868Z" level=error msg="encountered an error cleaning up failed sandbox \"f09f7173214e2fad38a268637ebf03211361e08c2c1a94f4ac8919f604795447\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:48:22.064542 containerd[1449]: time="2024-12-13T01:48:22.064419080Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-568767dfbd-qv8xv,Uid:c2e2bdd9-b460-49cb-90c4-8e7578a0674d,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f09f7173214e2fad38a268637ebf03211361e08c2c1a94f4ac8919f604795447\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:48:22.064542 containerd[1449]: time="2024-12-13T01:48:22.064366310Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5d4845d985-cnl5g,Uid:15fcbc53-768f-4fca-83b6-c146a5c50cc1,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e9e6f61c4c1ed3323381b693883ae8a6d5fbbc71f5b753b0da1bc35d5d9a0e84\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Dec 13 01:48:22.065791 kubelet[2458]: E1213 01:48:22.064627 2458 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e9e6f61c4c1ed3323381b693883ae8a6d5fbbc71f5b753b0da1bc35d5d9a0e84\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:48:22.065791 kubelet[2458]: E1213 01:48:22.064638 2458 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f09f7173214e2fad38a268637ebf03211361e08c2c1a94f4ac8919f604795447\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:48:22.065791 kubelet[2458]: E1213 01:48:22.064664 2458 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e9e6f61c4c1ed3323381b693883ae8a6d5fbbc71f5b753b0da1bc35d5d9a0e84\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5d4845d985-cnl5g" Dec 13 01:48:22.065791 kubelet[2458]: E1213 01:48:22.064690 2458 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e9e6f61c4c1ed3323381b693883ae8a6d5fbbc71f5b753b0da1bc35d5d9a0e84\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5d4845d985-cnl5g" Dec 13 01:48:22.065889 kubelet[2458]: E1213 01:48:22.064721 2458 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5d4845d985-cnl5g_calico-system(15fcbc53-768f-4fca-83b6-c146a5c50cc1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5d4845d985-cnl5g_calico-system(15fcbc53-768f-4fca-83b6-c146a5c50cc1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e9e6f61c4c1ed3323381b693883ae8a6d5fbbc71f5b753b0da1bc35d5d9a0e84\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5d4845d985-cnl5g" podUID="15fcbc53-768f-4fca-83b6-c146a5c50cc1" Dec 13 01:48:22.065889 kubelet[2458]: E1213 01:48:22.064675 2458 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f09f7173214e2fad38a268637ebf03211361e08c2c1a94f4ac8919f604795447\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-568767dfbd-qv8xv" Dec 13 01:48:22.065889 kubelet[2458]: E1213 01:48:22.064758 2458 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f09f7173214e2fad38a268637ebf03211361e08c2c1a94f4ac8919f604795447\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-568767dfbd-qv8xv" Dec 13 01:48:22.065969 kubelet[2458]: E1213 01:48:22.064793 2458 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-568767dfbd-qv8xv_calico-apiserver(c2e2bdd9-b460-49cb-90c4-8e7578a0674d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-568767dfbd-qv8xv_calico-apiserver(c2e2bdd9-b460-49cb-90c4-8e7578a0674d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f09f7173214e2fad38a268637ebf03211361e08c2c1a94f4ac8919f604795447\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-568767dfbd-qv8xv" podUID="c2e2bdd9-b460-49cb-90c4-8e7578a0674d" Dec 13 01:48:22.069099 containerd[1449]: time="2024-12-13T01:48:22.068943496Z" level=error msg="Failed to destroy network for sandbox \"2f930d0151c8c21688b5032961988634c8927a89b6fc267d50b95e464f66b1aa\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:48:22.075109 containerd[1449]: time="2024-12-13T01:48:22.074946511Z" level=error msg="encountered an error cleaning up failed sandbox \"2f930d0151c8c21688b5032961988634c8927a89b6fc267d50b95e464f66b1aa\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:48:22.075109 containerd[1449]: time="2024-12-13T01:48:22.075015684Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-ccdvt,Uid:5ccb309b-ed47-4e0c-ae45-cf7bfdd924c2,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2f930d0151c8c21688b5032961988634c8927a89b6fc267d50b95e464f66b1aa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:48:22.075269 kubelet[2458]: E1213 01:48:22.075211 2458 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2f930d0151c8c21688b5032961988634c8927a89b6fc267d50b95e464f66b1aa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:48:22.075306 kubelet[2458]: E1213 01:48:22.075282 2458 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2f930d0151c8c21688b5032961988634c8927a89b6fc267d50b95e464f66b1aa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-ccdvt" Dec 13 01:48:22.075306 kubelet[2458]: E1213 01:48:22.075299 2458 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to 
setup network for sandbox \"2f930d0151c8c21688b5032961988634c8927a89b6fc267d50b95e464f66b1aa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-ccdvt" Dec 13 01:48:22.075370 kubelet[2458]: E1213 01:48:22.075347 2458 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-ccdvt_kube-system(5ccb309b-ed47-4e0c-ae45-cf7bfdd924c2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-ccdvt_kube-system(5ccb309b-ed47-4e0c-ae45-cf7bfdd924c2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2f930d0151c8c21688b5032961988634c8927a89b6fc267d50b95e464f66b1aa\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-ccdvt" podUID="5ccb309b-ed47-4e0c-ae45-cf7bfdd924c2" Dec 13 01:48:22.637163 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-00abbd5d43ffcc0c24905f5b2367a2c892ac5cbc2d2cc5c32bba7f8ee2a5587c-shm.mount: Deactivated successfully. Dec 13 01:48:22.637257 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2f930d0151c8c21688b5032961988634c8927a89b6fc267d50b95e464f66b1aa-shm.mount: Deactivated successfully. Dec 13 01:48:22.637319 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f09f7173214e2fad38a268637ebf03211361e08c2c1a94f4ac8919f604795447-shm.mount: Deactivated successfully. Dec 13 01:48:22.637370 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e9e6f61c4c1ed3323381b693883ae8a6d5fbbc71f5b753b0da1bc35d5d9a0e84-shm.mount: Deactivated successfully. Dec 13 01:48:22.769686 systemd[1]: Created slice kubepods-besteffort-pod1079435b_e60b_443f_932d_e02d21e8e429.slice - libcontainer container kubepods-besteffort-pod1079435b_e60b_443f_932d_e02d21e8e429.slice. 
Dec 13 01:48:22.771986 containerd[1449]: time="2024-12-13T01:48:22.771719949Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tv6tv,Uid:1079435b-e60b-443f-932d-e02d21e8e429,Namespace:calico-system,Attempt:0,}" Dec 13 01:48:22.824992 containerd[1449]: time="2024-12-13T01:48:22.824945054Z" level=error msg="Failed to destroy network for sandbox \"9b97d4a163e2480f0a12e827e359cb721e036f51d99ffbb29491ec33ba750e08\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:48:22.825278 containerd[1449]: time="2024-12-13T01:48:22.825251271Z" level=error msg="encountered an error cleaning up failed sandbox \"9b97d4a163e2480f0a12e827e359cb721e036f51d99ffbb29491ec33ba750e08\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:48:22.825336 containerd[1449]: time="2024-12-13T01:48:22.825312763Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tv6tv,Uid:1079435b-e60b-443f-932d-e02d21e8e429,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9b97d4a163e2480f0a12e827e359cb721e036f51d99ffbb29491ec33ba750e08\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:48:22.825876 kubelet[2458]: E1213 01:48:22.825516 2458 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9b97d4a163e2480f0a12e827e359cb721e036f51d99ffbb29491ec33ba750e08\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:48:22.825876 kubelet[2458]: E1213 01:48:22.825575 2458 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9b97d4a163e2480f0a12e827e359cb721e036f51d99ffbb29491ec33ba750e08\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-tv6tv" Dec 13 01:48:22.825876 kubelet[2458]: E1213 01:48:22.825607 2458 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9b97d4a163e2480f0a12e827e359cb721e036f51d99ffbb29491ec33ba750e08\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-tv6tv" Dec 13 01:48:22.825992 kubelet[2458]: E1213 01:48:22.825648 2458 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-tv6tv_calico-system(1079435b-e60b-443f-932d-e02d21e8e429)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-tv6tv_calico-system(1079435b-e60b-443f-932d-e02d21e8e429)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"9b97d4a163e2480f0a12e827e359cb721e036f51d99ffbb29491ec33ba750e08\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-tv6tv" podUID="1079435b-e60b-443f-932d-e02d21e8e429" Dec 13 01:48:22.826827 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9b97d4a163e2480f0a12e827e359cb721e036f51d99ffbb29491ec33ba750e08-shm.mount: Deactivated successfully. Dec 13 01:48:22.909774 kubelet[2458]: I1213 01:48:22.909652 2458 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6e05dacb397eb09bdb17cafb5d5150dcb1c0fa877ba386d38c38595f8ec6d236" Dec 13 01:48:22.910879 containerd[1449]: time="2024-12-13T01:48:22.910846417Z" level=info msg="StopPodSandbox for \"6e05dacb397eb09bdb17cafb5d5150dcb1c0fa877ba386d38c38595f8ec6d236\"" Dec 13 01:48:22.911274 containerd[1449]: time="2024-12-13T01:48:22.911054016Z" level=info msg="Ensure that sandbox 6e05dacb397eb09bdb17cafb5d5150dcb1c0fa877ba386d38c38595f8ec6d236 in task-service has been cleanup successfully" Dec 13 01:48:22.912942 kubelet[2458]: I1213 01:48:22.912912 2458 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2f930d0151c8c21688b5032961988634c8927a89b6fc267d50b95e464f66b1aa" Dec 13 01:48:22.914777 containerd[1449]: time="2024-12-13T01:48:22.914734352Z" level=info msg="StopPodSandbox for \"2f930d0151c8c21688b5032961988634c8927a89b6fc267d50b95e464f66b1aa\"" Dec 13 01:48:22.915077 containerd[1449]: time="2024-12-13T01:48:22.914882941Z" level=info msg="Ensure that sandbox 2f930d0151c8c21688b5032961988634c8927a89b6fc267d50b95e464f66b1aa in task-service has been cleanup successfully" Dec 13 01:48:22.915873 kubelet[2458]: I1213 01:48:22.915845 2458 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e9e6f61c4c1ed3323381b693883ae8a6d5fbbc71f5b753b0da1bc35d5d9a0e84" Dec 13 01:48:22.916561 containerd[1449]: time="2024-12-13T01:48:22.916461559Z" level=info msg="StopPodSandbox for \"e9e6f61c4c1ed3323381b693883ae8a6d5fbbc71f5b753b0da1bc35d5d9a0e84\"" Dec 13 01:48:22.916951 containerd[1449]: time="2024-12-13T01:48:22.916630191Z" level=info msg="Ensure that sandbox e9e6f61c4c1ed3323381b693883ae8a6d5fbbc71f5b753b0da1bc35d5d9a0e84 in task-service has been cleanup successfully" Dec 13 01:48:22.927172 kubelet[2458]: I1213 01:48:22.926411 2458 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9b97d4a163e2480f0a12e827e359cb721e036f51d99ffbb29491ec33ba750e08" Dec 13 01:48:22.927278 containerd[1449]: time="2024-12-13T01:48:22.927051682Z" level=info msg="StopPodSandbox for \"9b97d4a163e2480f0a12e827e359cb721e036f51d99ffbb29491ec33ba750e08\"" Dec 13 01:48:22.927479 containerd[1449]: time="2024-12-13T01:48:22.927361820Z" level=info msg="Ensure that sandbox 9b97d4a163e2480f0a12e827e359cb721e036f51d99ffbb29491ec33ba750e08 in task-service has been cleanup successfully" Dec 13 01:48:22.927591 kubelet[2458]: I1213 01:48:22.927510 2458 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="00abbd5d43ffcc0c24905f5b2367a2c892ac5cbc2d2cc5c32bba7f8ee2a5587c" Dec 13 01:48:22.928201 containerd[1449]: time="2024-12-13T01:48:22.928174494Z" level=info msg="StopPodSandbox for \"00abbd5d43ffcc0c24905f5b2367a2c892ac5cbc2d2cc5c32bba7f8ee2a5587c\"" Dec 13 01:48:22.928337 containerd[1449]: time="2024-12-13T01:48:22.928319721Z" level=info msg="Ensure that sandbox 
00abbd5d43ffcc0c24905f5b2367a2c892ac5cbc2d2cc5c32bba7f8ee2a5587c in task-service has been cleanup successfully" Dec 13 01:48:22.931303 kubelet[2458]: I1213 01:48:22.930978 2458 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f09f7173214e2fad38a268637ebf03211361e08c2c1a94f4ac8919f604795447" Dec 13 01:48:22.932797 containerd[1449]: time="2024-12-13T01:48:22.932766162Z" level=info msg="StopPodSandbox for \"f09f7173214e2fad38a268637ebf03211361e08c2c1a94f4ac8919f604795447\"" Dec 13 01:48:22.932931 containerd[1449]: time="2024-12-13T01:48:22.932912230Z" level=info msg="Ensure that sandbox f09f7173214e2fad38a268637ebf03211361e08c2c1a94f4ac8919f604795447 in task-service has been cleanup successfully" Dec 13 01:48:22.975347 containerd[1449]: time="2024-12-13T01:48:22.975296325Z" level=error msg="StopPodSandbox for \"e9e6f61c4c1ed3323381b693883ae8a6d5fbbc71f5b753b0da1bc35d5d9a0e84\" failed" error="failed to destroy network for sandbox \"e9e6f61c4c1ed3323381b693883ae8a6d5fbbc71f5b753b0da1bc35d5d9a0e84\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:48:22.975593 containerd[1449]: time="2024-12-13T01:48:22.975298965Z" level=error msg="StopPodSandbox for \"6e05dacb397eb09bdb17cafb5d5150dcb1c0fa877ba386d38c38595f8ec6d236\" failed" error="failed to destroy network for sandbox \"6e05dacb397eb09bdb17cafb5d5150dcb1c0fa877ba386d38c38595f8ec6d236\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:48:22.975921 kubelet[2458]: E1213 01:48:22.975751 2458 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6e05dacb397eb09bdb17cafb5d5150dcb1c0fa877ba386d38c38595f8ec6d236\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6e05dacb397eb09bdb17cafb5d5150dcb1c0fa877ba386d38c38595f8ec6d236" Dec 13 01:48:22.975921 kubelet[2458]: E1213 01:48:22.975810 2458 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6e05dacb397eb09bdb17cafb5d5150dcb1c0fa877ba386d38c38595f8ec6d236"} Dec 13 01:48:22.975921 kubelet[2458]: E1213 01:48:22.975752 2458 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e9e6f61c4c1ed3323381b693883ae8a6d5fbbc71f5b753b0da1bc35d5d9a0e84\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e9e6f61c4c1ed3323381b693883ae8a6d5fbbc71f5b753b0da1bc35d5d9a0e84" Dec 13 01:48:22.975921 kubelet[2458]: E1213 01:48:22.975866 2458 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"1e0c6640-314d-45ec-b91c-da2f72cfd50a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6e05dacb397eb09bdb17cafb5d5150dcb1c0fa877ba386d38c38595f8ec6d236\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
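The repeated "rpc error: code = Unknown desc = ..." prefix in the kubelet lines is ordinary gRPC status formatting: containerd surfaces the CNI failure as a status with code Unknown over the CRI socket, the kubelet logs the client-side error string, and then wraps it once more inside KillPodSandboxError, which is where the layers of escaped quoting come from. A sketch, assuming the standard grpc-go status package:

```go
// Where the "rpc error: code = Unknown desc = ..." text comes from:
// grpc-go renders a status error into exactly that string on the client.
package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

func main() {
	// What the CRI server (containerd) returns when the CNI DEL fails.
	err := status.Error(codes.Unknown,
		`failed to destroy network for sandbox "...": plugin type="calico" failed (delete): `+
			`stat /var/lib/calico/nodename: no such file or directory`)

	// What the kubelet sees and logs.
	// Prints: rpc error: code = Unknown desc = failed to destroy network for sandbox "...": ...
	fmt.Println(err)
}
```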
Dec 13 01:48:22.975921 kubelet[2458]: E1213 01:48:22.975880 2458 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e9e6f61c4c1ed3323381b693883ae8a6d5fbbc71f5b753b0da1bc35d5d9a0e84"} Dec 13 01:48:22.976516 kubelet[2458]: E1213 01:48:22.975888 2458 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"1e0c6640-314d-45ec-b91c-da2f72cfd50a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6e05dacb397eb09bdb17cafb5d5150dcb1c0fa877ba386d38c38595f8ec6d236\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-hwhh6" podUID="1e0c6640-314d-45ec-b91c-da2f72cfd50a" Dec 13 01:48:22.976516 kubelet[2458]: E1213 01:48:22.975911 2458 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"15fcbc53-768f-4fca-83b6-c146a5c50cc1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e9e6f61c4c1ed3323381b693883ae8a6d5fbbc71f5b753b0da1bc35d5d9a0e84\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 01:48:22.976516 kubelet[2458]: E1213 01:48:22.975932 2458 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"15fcbc53-768f-4fca-83b6-c146a5c50cc1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e9e6f61c4c1ed3323381b693883ae8a6d5fbbc71f5b753b0da1bc35d5d9a0e84\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5d4845d985-cnl5g" podUID="15fcbc53-768f-4fca-83b6-c146a5c50cc1" Dec 13 01:48:22.978826 containerd[1449]: time="2024-12-13T01:48:22.978781584Z" level=error msg="StopPodSandbox for \"2f930d0151c8c21688b5032961988634c8927a89b6fc267d50b95e464f66b1aa\" failed" error="failed to destroy network for sandbox \"2f930d0151c8c21688b5032961988634c8927a89b6fc267d50b95e464f66b1aa\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:48:22.979111 kubelet[2458]: E1213 01:48:22.978952 2458 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"2f930d0151c8c21688b5032961988634c8927a89b6fc267d50b95e464f66b1aa\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="2f930d0151c8c21688b5032961988634c8927a89b6fc267d50b95e464f66b1aa" Dec 13 01:48:22.979111 kubelet[2458]: E1213 01:48:22.978992 2458 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"2f930d0151c8c21688b5032961988634c8927a89b6fc267d50b95e464f66b1aa"} Dec 13 01:48:22.979111 kubelet[2458]: E1213 01:48:22.979018 2458 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5ccb309b-ed47-4e0c-ae45-cf7bfdd924c2\" with 
KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2f930d0151c8c21688b5032961988634c8927a89b6fc267d50b95e464f66b1aa\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 01:48:22.979111 kubelet[2458]: E1213 01:48:22.979039 2458 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5ccb309b-ed47-4e0c-ae45-cf7bfdd924c2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2f930d0151c8c21688b5032961988634c8927a89b6fc267d50b95e464f66b1aa\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-ccdvt" podUID="5ccb309b-ed47-4e0c-ae45-cf7bfdd924c2" Dec 13 01:48:22.987782 containerd[1449]: time="2024-12-13T01:48:22.987726355Z" level=error msg="StopPodSandbox for \"00abbd5d43ffcc0c24905f5b2367a2c892ac5cbc2d2cc5c32bba7f8ee2a5587c\" failed" error="failed to destroy network for sandbox \"00abbd5d43ffcc0c24905f5b2367a2c892ac5cbc2d2cc5c32bba7f8ee2a5587c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:48:22.989623 kubelet[2458]: E1213 01:48:22.987949 2458 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"00abbd5d43ffcc0c24905f5b2367a2c892ac5cbc2d2cc5c32bba7f8ee2a5587c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="00abbd5d43ffcc0c24905f5b2367a2c892ac5cbc2d2cc5c32bba7f8ee2a5587c" Dec 13 01:48:22.989623 kubelet[2458]: E1213 01:48:22.987991 2458 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"00abbd5d43ffcc0c24905f5b2367a2c892ac5cbc2d2cc5c32bba7f8ee2a5587c"} Dec 13 01:48:22.989623 kubelet[2458]: E1213 01:48:22.988030 2458 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"1e6eb865-39dc-4ee5-9cdc-f699e1d6b8f5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"00abbd5d43ffcc0c24905f5b2367a2c892ac5cbc2d2cc5c32bba7f8ee2a5587c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 01:48:22.989623 kubelet[2458]: E1213 01:48:22.988049 2458 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"1e6eb865-39dc-4ee5-9cdc-f699e1d6b8f5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"00abbd5d43ffcc0c24905f5b2367a2c892ac5cbc2d2cc5c32bba7f8ee2a5587c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-568767dfbd-c99fj" podUID="1e6eb865-39dc-4ee5-9cdc-f699e1d6b8f5" Dec 13 01:48:22.990744 containerd[1449]: time="2024-12-13T01:48:22.990682554Z" level=error 
msg="StopPodSandbox for \"9b97d4a163e2480f0a12e827e359cb721e036f51d99ffbb29491ec33ba750e08\" failed" error="failed to destroy network for sandbox \"9b97d4a163e2480f0a12e827e359cb721e036f51d99ffbb29491ec33ba750e08\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:48:22.991530 kubelet[2458]: E1213 01:48:22.991421 2458 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9b97d4a163e2480f0a12e827e359cb721e036f51d99ffbb29491ec33ba750e08\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9b97d4a163e2480f0a12e827e359cb721e036f51d99ffbb29491ec33ba750e08" Dec 13 01:48:22.991530 kubelet[2458]: E1213 01:48:22.991459 2458 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9b97d4a163e2480f0a12e827e359cb721e036f51d99ffbb29491ec33ba750e08"} Dec 13 01:48:22.991530 kubelet[2458]: E1213 01:48:22.991483 2458 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"1079435b-e60b-443f-932d-e02d21e8e429\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9b97d4a163e2480f0a12e827e359cb721e036f51d99ffbb29491ec33ba750e08\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 01:48:22.991530 kubelet[2458]: E1213 01:48:22.991499 2458 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"1079435b-e60b-443f-932d-e02d21e8e429\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9b97d4a163e2480f0a12e827e359cb721e036f51d99ffbb29491ec33ba750e08\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-tv6tv" podUID="1079435b-e60b-443f-932d-e02d21e8e429" Dec 13 01:48:22.997699 containerd[1449]: time="2024-12-13T01:48:22.997656073Z" level=error msg="StopPodSandbox for \"f09f7173214e2fad38a268637ebf03211361e08c2c1a94f4ac8919f604795447\" failed" error="failed to destroy network for sandbox \"f09f7173214e2fad38a268637ebf03211361e08c2c1a94f4ac8919f604795447\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:48:22.998012 kubelet[2458]: E1213 01:48:22.997977 2458 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f09f7173214e2fad38a268637ebf03211361e08c2c1a94f4ac8919f604795447\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f09f7173214e2fad38a268637ebf03211361e08c2c1a94f4ac8919f604795447" Dec 13 01:48:22.998063 kubelet[2458]: E1213 01:48:22.998013 2458 kuberuntime_manager.go:1477] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"f09f7173214e2fad38a268637ebf03211361e08c2c1a94f4ac8919f604795447"} Dec 13 01:48:22.998063 kubelet[2458]: E1213 01:48:22.998039 2458 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c2e2bdd9-b460-49cb-90c4-8e7578a0674d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f09f7173214e2fad38a268637ebf03211361e08c2c1a94f4ac8919f604795447\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 01:48:22.998130 kubelet[2458]: E1213 01:48:22.998057 2458 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c2e2bdd9-b460-49cb-90c4-8e7578a0674d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f09f7173214e2fad38a268637ebf03211361e08c2c1a94f4ac8919f604795447\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-568767dfbd-qv8xv" podUID="c2e2bdd9-b460-49cb-90c4-8e7578a0674d" Dec 13 01:48:25.797300 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1275564453.mount: Deactivated successfully. Dec 13 01:48:26.033434 containerd[1449]: time="2024-12-13T01:48:26.033186724Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:48:26.033803 containerd[1449]: time="2024-12-13T01:48:26.033739653Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=137671762" Dec 13 01:48:26.034515 containerd[1449]: time="2024-12-13T01:48:26.034477453Z" level=info msg="ImageCreate event name:\"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:48:26.040980 containerd[1449]: time="2024-12-13T01:48:26.036509182Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:48:26.041279 containerd[1449]: time="2024-12-13T01:48:26.037185251Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"137671624\" in 4.125689955s" Dec 13 01:48:26.041279 containerd[1449]: time="2024-12-13T01:48:26.041188380Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\"" Dec 13 01:48:26.052046 containerd[1449]: time="2024-12-13T01:48:26.051953564Z" level=info msg="CreateContainer within sandbox \"50b5f431117afae110a4a29e58f76373dad849e2f6943b66568d54b30592220d\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Dec 13 01:48:26.065898 containerd[1449]: time="2024-12-13T01:48:26.065852056Z" level=info msg="CreateContainer within sandbox \"50b5f431117afae110a4a29e58f76373dad849e2f6943b66568d54b30592220d\" for 
&ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"eb2c7e08829a78aa6d31101b3f33aa2e7015284c6de298a5fcd17118f5e2e07a\"" Dec 13 01:48:26.066290 containerd[1449]: time="2024-12-13T01:48:26.066268363Z" level=info msg="StartContainer for \"eb2c7e08829a78aa6d31101b3f33aa2e7015284c6de298a5fcd17118f5e2e07a\"" Dec 13 01:48:26.114753 systemd[1]: Started cri-containerd-eb2c7e08829a78aa6d31101b3f33aa2e7015284c6de298a5fcd17118f5e2e07a.scope - libcontainer container eb2c7e08829a78aa6d31101b3f33aa2e7015284c6de298a5fcd17118f5e2e07a. Dec 13 01:48:26.146162 containerd[1449]: time="2024-12-13T01:48:26.143513238Z" level=info msg="StartContainer for \"eb2c7e08829a78aa6d31101b3f33aa2e7015284c6de298a5fcd17118f5e2e07a\" returns successfully" Dec 13 01:48:26.223136 kubelet[2458]: I1213 01:48:26.222773 2458 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 01:48:26.224598 kubelet[2458]: E1213 01:48:26.223153 2458 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:48:26.334976 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Dec 13 01:48:26.335107 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. Dec 13 01:48:26.943032 kubelet[2458]: E1213 01:48:26.942979 2458 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:48:26.945400 kubelet[2458]: E1213 01:48:26.945366 2458 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:48:26.958424 kubelet[2458]: I1213 01:48:26.958308 2458 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-xzlkt" podStartSLOduration=1.732418214 podStartE2EDuration="13.958291483s" podCreationTimestamp="2024-12-13 01:48:13 +0000 UTC" firstStartedPulling="2024-12-13 01:48:13.815942813 +0000 UTC m=+14.132588828" lastFinishedPulling="2024-12-13 01:48:26.041816042 +0000 UTC m=+26.358462097" observedRunningTime="2024-12-13 01:48:26.957546122 +0000 UTC m=+27.274192297" watchObservedRunningTime="2024-12-13 01:48:26.958291483 +0000 UTC m=+27.274937618" Dec 13 01:48:27.755638 kernel: bpftool[3717]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Dec 13 01:48:27.921362 systemd-networkd[1379]: vxlan.calico: Link UP Dec 13 01:48:27.921370 systemd-networkd[1379]: vxlan.calico: Gained carrier Dec 13 01:48:27.946890 kubelet[2458]: I1213 01:48:27.946836 2458 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 01:48:27.947332 kubelet[2458]: E1213 01:48:27.947241 2458 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:48:29.344805 systemd-networkd[1379]: vxlan.calico: Gained IPv6LL Dec 13 01:48:34.766170 containerd[1449]: time="2024-12-13T01:48:34.765961536Z" level=info msg="StopPodSandbox for \"f09f7173214e2fad38a268637ebf03211361e08c2c1a94f4ac8919f604795447\"" Dec 13 01:48:34.774385 containerd[1449]: time="2024-12-13T01:48:34.771618243Z" level=info msg="StopPodSandbox for \"2f930d0151c8c21688b5032961988634c8927a89b6fc267d50b95e464f66b1aa\"" Dec 13 01:48:35.012415 containerd[1449]: 2024-12-13
01:48:34.885 [INFO][3842] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="2f930d0151c8c21688b5032961988634c8927a89b6fc267d50b95e464f66b1aa" Dec 13 01:48:35.012415 containerd[1449]: 2024-12-13 01:48:34.885 [INFO][3842] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="2f930d0151c8c21688b5032961988634c8927a89b6fc267d50b95e464f66b1aa" iface="eth0" netns="/var/run/netns/cni-7d5fcc49-130b-96a4-16e1-0056ea6dd44d" Dec 13 01:48:35.012415 containerd[1449]: 2024-12-13 01:48:34.886 [INFO][3842] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="2f930d0151c8c21688b5032961988634c8927a89b6fc267d50b95e464f66b1aa" iface="eth0" netns="/var/run/netns/cni-7d5fcc49-130b-96a4-16e1-0056ea6dd44d" Dec 13 01:48:35.012415 containerd[1449]: 2024-12-13 01:48:34.887 [INFO][3842] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="2f930d0151c8c21688b5032961988634c8927a89b6fc267d50b95e464f66b1aa" iface="eth0" netns="/var/run/netns/cni-7d5fcc49-130b-96a4-16e1-0056ea6dd44d" Dec 13 01:48:35.012415 containerd[1449]: 2024-12-13 01:48:34.887 [INFO][3842] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="2f930d0151c8c21688b5032961988634c8927a89b6fc267d50b95e464f66b1aa" Dec 13 01:48:35.012415 containerd[1449]: 2024-12-13 01:48:34.887 [INFO][3842] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2f930d0151c8c21688b5032961988634c8927a89b6fc267d50b95e464f66b1aa" Dec 13 01:48:35.012415 containerd[1449]: 2024-12-13 01:48:34.999 [INFO][3858] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2f930d0151c8c21688b5032961988634c8927a89b6fc267d50b95e464f66b1aa" HandleID="k8s-pod-network.2f930d0151c8c21688b5032961988634c8927a89b6fc267d50b95e464f66b1aa" Workload="localhost-k8s-coredns--6f6b679f8f--ccdvt-eth0" Dec 13 01:48:35.012415 containerd[1449]: 2024-12-13 01:48:34.999 [INFO][3858] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:48:35.012415 containerd[1449]: 2024-12-13 01:48:34.999 [INFO][3858] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:48:35.012415 containerd[1449]: 2024-12-13 01:48:35.007 [WARNING][3858] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="2f930d0151c8c21688b5032961988634c8927a89b6fc267d50b95e464f66b1aa" HandleID="k8s-pod-network.2f930d0151c8c21688b5032961988634c8927a89b6fc267d50b95e464f66b1aa" Workload="localhost-k8s-coredns--6f6b679f8f--ccdvt-eth0" Dec 13 01:48:35.012415 containerd[1449]: 2024-12-13 01:48:35.008 [INFO][3858] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2f930d0151c8c21688b5032961988634c8927a89b6fc267d50b95e464f66b1aa" HandleID="k8s-pod-network.2f930d0151c8c21688b5032961988634c8927a89b6fc267d50b95e464f66b1aa" Workload="localhost-k8s-coredns--6f6b679f8f--ccdvt-eth0" Dec 13 01:48:35.012415 containerd[1449]: 2024-12-13 01:48:35.009 [INFO][3858] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:48:35.012415 containerd[1449]: 2024-12-13 01:48:35.010 [INFO][3842] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="2f930d0151c8c21688b5032961988634c8927a89b6fc267d50b95e464f66b1aa" Dec 13 01:48:35.013706 containerd[1449]: time="2024-12-13T01:48:35.013611553Z" level=info msg="TearDown network for sandbox \"2f930d0151c8c21688b5032961988634c8927a89b6fc267d50b95e464f66b1aa\" successfully" Dec 13 01:48:35.014018 containerd[1449]: time="2024-12-13T01:48:35.013947474Z" level=info msg="StopPodSandbox for \"2f930d0151c8c21688b5032961988634c8927a89b6fc267d50b95e464f66b1aa\" returns successfully" Dec 13 01:48:35.015127 kubelet[2458]: E1213 01:48:35.015098 2458 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:48:35.017481 containerd[1449]: time="2024-12-13T01:48:35.015707048Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-ccdvt,Uid:5ccb309b-ed47-4e0c-ae45-cf7bfdd924c2,Namespace:kube-system,Attempt:1,}" Dec 13 01:48:35.016725 systemd[1]: run-netns-cni\x2d7d5fcc49\x2d130b\x2d96a4\x2d16e1\x2d0056ea6dd44d.mount: Deactivated successfully. Dec 13 01:48:35.026994 containerd[1449]: 2024-12-13 01:48:34.882 [INFO][3843] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="f09f7173214e2fad38a268637ebf03211361e08c2c1a94f4ac8919f604795447" Dec 13 01:48:35.026994 containerd[1449]: 2024-12-13 01:48:34.882 [INFO][3843] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f09f7173214e2fad38a268637ebf03211361e08c2c1a94f4ac8919f604795447" iface="eth0" netns="/var/run/netns/cni-665ff772-b67e-d363-4be4-f18d4e9e3f27" Dec 13 01:48:35.026994 containerd[1449]: 2024-12-13 01:48:34.883 [INFO][3843] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f09f7173214e2fad38a268637ebf03211361e08c2c1a94f4ac8919f604795447" iface="eth0" netns="/var/run/netns/cni-665ff772-b67e-d363-4be4-f18d4e9e3f27" Dec 13 01:48:35.026994 containerd[1449]: 2024-12-13 01:48:34.884 [INFO][3843] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="f09f7173214e2fad38a268637ebf03211361e08c2c1a94f4ac8919f604795447" iface="eth0" netns="/var/run/netns/cni-665ff772-b67e-d363-4be4-f18d4e9e3f27" Dec 13 01:48:35.026994 containerd[1449]: 2024-12-13 01:48:34.884 [INFO][3843] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="f09f7173214e2fad38a268637ebf03211361e08c2c1a94f4ac8919f604795447" Dec 13 01:48:35.026994 containerd[1449]: 2024-12-13 01:48:34.884 [INFO][3843] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f09f7173214e2fad38a268637ebf03211361e08c2c1a94f4ac8919f604795447" Dec 13 01:48:35.026994 containerd[1449]: 2024-12-13 01:48:34.999 [INFO][3857] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f09f7173214e2fad38a268637ebf03211361e08c2c1a94f4ac8919f604795447" HandleID="k8s-pod-network.f09f7173214e2fad38a268637ebf03211361e08c2c1a94f4ac8919f604795447" Workload="localhost-k8s-calico--apiserver--568767dfbd--qv8xv-eth0" Dec 13 01:48:35.026994 containerd[1449]: 2024-12-13 01:48:34.999 [INFO][3857] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:48:35.026994 containerd[1449]: 2024-12-13 01:48:35.009 [INFO][3857] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:48:35.026994 containerd[1449]: 2024-12-13 01:48:35.021 [WARNING][3857] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f09f7173214e2fad38a268637ebf03211361e08c2c1a94f4ac8919f604795447" HandleID="k8s-pod-network.f09f7173214e2fad38a268637ebf03211361e08c2c1a94f4ac8919f604795447" Workload="localhost-k8s-calico--apiserver--568767dfbd--qv8xv-eth0" Dec 13 01:48:35.026994 containerd[1449]: 2024-12-13 01:48:35.021 [INFO][3857] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f09f7173214e2fad38a268637ebf03211361e08c2c1a94f4ac8919f604795447" HandleID="k8s-pod-network.f09f7173214e2fad38a268637ebf03211361e08c2c1a94f4ac8919f604795447" Workload="localhost-k8s-calico--apiserver--568767dfbd--qv8xv-eth0" Dec 13 01:48:35.026994 containerd[1449]: 2024-12-13 01:48:35.023 [INFO][3857] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:48:35.026994 containerd[1449]: 2024-12-13 01:48:35.024 [INFO][3843] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="f09f7173214e2fad38a268637ebf03211361e08c2c1a94f4ac8919f604795447" Dec 13 01:48:35.027503 containerd[1449]: time="2024-12-13T01:48:35.027115554Z" level=info msg="TearDown network for sandbox \"f09f7173214e2fad38a268637ebf03211361e08c2c1a94f4ac8919f604795447\" successfully" Dec 13 01:48:35.027503 containerd[1449]: time="2024-12-13T01:48:35.027145277Z" level=info msg="StopPodSandbox for \"f09f7173214e2fad38a268637ebf03211361e08c2c1a94f4ac8919f604795447\" returns successfully" Dec 13 01:48:35.029473 systemd[1]: run-netns-cni\x2d665ff772\x2db67e\x2dd363\x2d4be4\x2df18d4e9e3f27.mount: Deactivated successfully. Dec 13 01:48:35.036472 containerd[1449]: time="2024-12-13T01:48:35.034890498Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-568767dfbd-qv8xv,Uid:c2e2bdd9-b460-49cb-90c4-8e7578a0674d,Namespace:calico-apiserver,Attempt:1,}" Dec 13 01:48:35.160685 systemd-networkd[1379]: cali292a06fe556: Link UP Dec 13 01:48:35.160905 systemd-networkd[1379]: cali292a06fe556: Gained carrier Dec 13 01:48:35.174686 containerd[1449]: 2024-12-13 01:48:35.067 [INFO][3875] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--6f6b679f8f--ccdvt-eth0 coredns-6f6b679f8f- kube-system 5ccb309b-ed47-4e0c-ae45-cf7bfdd924c2 804 0 2024-12-13 01:48:06 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-6f6b679f8f-ccdvt eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali292a06fe556 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="845e11f6cab58dfc72328ffdbca08a5f2ababb3d7b27f4214e221b4d777197bd" Namespace="kube-system" Pod="coredns-6f6b679f8f-ccdvt" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--ccdvt-" Dec 13 01:48:35.174686 containerd[1449]: 2024-12-13 01:48:35.068 [INFO][3875] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="845e11f6cab58dfc72328ffdbca08a5f2ababb3d7b27f4214e221b4d777197bd" Namespace="kube-system" Pod="coredns-6f6b679f8f-ccdvt" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--ccdvt-eth0" Dec 13 01:48:35.174686 containerd[1449]: 2024-12-13 01:48:35.102 [INFO][3900] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="845e11f6cab58dfc72328ffdbca08a5f2ababb3d7b27f4214e221b4d777197bd" HandleID="k8s-pod-network.845e11f6cab58dfc72328ffdbca08a5f2ababb3d7b27f4214e221b4d777197bd" Workload="localhost-k8s-coredns--6f6b679f8f--ccdvt-eth0" Dec 13 01:48:35.174686 
containerd[1449]: 2024-12-13 01:48:35.116 [INFO][3900] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="845e11f6cab58dfc72328ffdbca08a5f2ababb3d7b27f4214e221b4d777197bd" HandleID="k8s-pod-network.845e11f6cab58dfc72328ffdbca08a5f2ababb3d7b27f4214e221b4d777197bd" Workload="localhost-k8s-coredns--6f6b679f8f--ccdvt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002f4b00), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-6f6b679f8f-ccdvt", "timestamp":"2024-12-13 01:48:35.102412462 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 01:48:35.174686 containerd[1449]: 2024-12-13 01:48:35.116 [INFO][3900] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:48:35.174686 containerd[1449]: 2024-12-13 01:48:35.116 [INFO][3900] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:48:35.174686 containerd[1449]: 2024-12-13 01:48:35.118 [INFO][3900] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 13 01:48:35.174686 containerd[1449]: 2024-12-13 01:48:35.120 [INFO][3900] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.845e11f6cab58dfc72328ffdbca08a5f2ababb3d7b27f4214e221b4d777197bd" host="localhost" Dec 13 01:48:35.174686 containerd[1449]: 2024-12-13 01:48:35.135 [INFO][3900] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Dec 13 01:48:35.174686 containerd[1449]: 2024-12-13 01:48:35.139 [INFO][3900] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Dec 13 01:48:35.174686 containerd[1449]: 2024-12-13 01:48:35.141 [INFO][3900] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 13 01:48:35.174686 containerd[1449]: 2024-12-13 01:48:35.143 [INFO][3900] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 13 01:48:35.174686 containerd[1449]: 2024-12-13 01:48:35.143 [INFO][3900] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.845e11f6cab58dfc72328ffdbca08a5f2ababb3d7b27f4214e221b4d777197bd" host="localhost" Dec 13 01:48:35.174686 containerd[1449]: 2024-12-13 01:48:35.145 [INFO][3900] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.845e11f6cab58dfc72328ffdbca08a5f2ababb3d7b27f4214e221b4d777197bd Dec 13 01:48:35.174686 containerd[1449]: 2024-12-13 01:48:35.148 [INFO][3900] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.845e11f6cab58dfc72328ffdbca08a5f2ababb3d7b27f4214e221b4d777197bd" host="localhost" Dec 13 01:48:35.174686 containerd[1449]: 2024-12-13 01:48:35.153 [INFO][3900] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.845e11f6cab58dfc72328ffdbca08a5f2ababb3d7b27f4214e221b4d777197bd" host="localhost" Dec 13 01:48:35.174686 containerd[1449]: 2024-12-13 01:48:35.153 [INFO][3900] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.845e11f6cab58dfc72328ffdbca08a5f2ababb3d7b27f4214e221b4d777197bd" host="localhost" Dec 13 01:48:35.174686 containerd[1449]: 2024-12-13 01:48:35.153 [INFO][3900] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
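The IPAM trace above is the normal Calico allocation path: this host holds an affinity for the block 192.168.88.128/26 (64 addresses), takes the host-wide IPAM lock, and claims the first free address, 192.168.88.129, for the coredns pod; the calico-apiserver pod draws 192.168.88.130 from the same block a moment later (below). A toy first-free allocator that reproduces that ordering; a sketch only, since Calico's real IPAM persists claims in the datastore and serializes them behind the lock logged here:

```go
// Toy first-free allocation over the /26 block visible in the IPAM trace.
package main

import (
	"fmt"
	"net/netip"
)

// firstFree walks the block in address order and returns the first
// address not yet recorded as used.
func firstFree(block netip.Prefix, used map[netip.Addr]bool) (netip.Addr, bool) {
	for a := block.Addr(); block.Contains(a); a = a.Next() {
		if !used[a] {
			return a, true
		}
	}
	return netip.Addr{}, false
}

func main() {
	block := netip.MustParsePrefix("192.168.88.128/26") // 64 addresses
	used := map[netip.Addr]bool{
		// Assumed already taken on this node: Calico typically assigns the
		// node's own tunnel address (vxlan.calico here) from the same pool,
		// which is consistent with pod allocation starting at .129.
		netip.MustParseAddr("192.168.88.128"): true,
	}
	for _, pod := range []string{
		"coredns-6f6b679f8f-ccdvt",          // got 192.168.88.129 in the log
		"calico-apiserver-568767dfbd-qv8xv", // got 192.168.88.130
	} {
		a, ok := firstFree(block, used)
		if !ok {
			panic("block exhausted")
		}
		used[a] = true
		fmt.Printf("%s -> %s\n", pod, a)
	}
}
```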
Dec 13 01:48:35.174686 containerd[1449]: 2024-12-13 01:48:35.153 [INFO][3900] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="845e11f6cab58dfc72328ffdbca08a5f2ababb3d7b27f4214e221b4d777197bd" HandleID="k8s-pod-network.845e11f6cab58dfc72328ffdbca08a5f2ababb3d7b27f4214e221b4d777197bd" Workload="localhost-k8s-coredns--6f6b679f8f--ccdvt-eth0" Dec 13 01:48:35.175767 containerd[1449]: 2024-12-13 01:48:35.155 [INFO][3875] cni-plugin/k8s.go 386: Populated endpoint ContainerID="845e11f6cab58dfc72328ffdbca08a5f2ababb3d7b27f4214e221b4d777197bd" Namespace="kube-system" Pod="coredns-6f6b679f8f-ccdvt" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--ccdvt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--ccdvt-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"5ccb309b-ed47-4e0c-ae45-cf7bfdd924c2", ResourceVersion:"804", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 48, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-6f6b679f8f-ccdvt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali292a06fe556", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:48:35.175767 containerd[1449]: 2024-12-13 01:48:35.156 [INFO][3875] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="845e11f6cab58dfc72328ffdbca08a5f2ababb3d7b27f4214e221b4d777197bd" Namespace="kube-system" Pod="coredns-6f6b679f8f-ccdvt" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--ccdvt-eth0" Dec 13 01:48:35.175767 containerd[1449]: 2024-12-13 01:48:35.156 [INFO][3875] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali292a06fe556 ContainerID="845e11f6cab58dfc72328ffdbca08a5f2ababb3d7b27f4214e221b4d777197bd" Namespace="kube-system" Pod="coredns-6f6b679f8f-ccdvt" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--ccdvt-eth0" Dec 13 01:48:35.175767 containerd[1449]: 2024-12-13 01:48:35.159 [INFO][3875] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="845e11f6cab58dfc72328ffdbca08a5f2ababb3d7b27f4214e221b4d777197bd" Namespace="kube-system" Pod="coredns-6f6b679f8f-ccdvt" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--ccdvt-eth0" Dec 13 01:48:35.175767 containerd[1449]: 2024-12-13 01:48:35.159 
[INFO][3875] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="845e11f6cab58dfc72328ffdbca08a5f2ababb3d7b27f4214e221b4d777197bd" Namespace="kube-system" Pod="coredns-6f6b679f8f-ccdvt" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--ccdvt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--ccdvt-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"5ccb309b-ed47-4e0c-ae45-cf7bfdd924c2", ResourceVersion:"804", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 48, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"845e11f6cab58dfc72328ffdbca08a5f2ababb3d7b27f4214e221b4d777197bd", Pod:"coredns-6f6b679f8f-ccdvt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali292a06fe556", MAC:"12:89:3c:0d:c2:0a", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:48:35.175767 containerd[1449]: 2024-12-13 01:48:35.169 [INFO][3875] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="845e11f6cab58dfc72328ffdbca08a5f2ababb3d7b27f4214e221b4d777197bd" Namespace="kube-system" Pod="coredns-6f6b679f8f-ccdvt" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--ccdvt-eth0" Dec 13 01:48:35.213641 containerd[1449]: time="2024-12-13T01:48:35.213332619Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:48:35.213641 containerd[1449]: time="2024-12-13T01:48:35.213606212Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:48:35.213641 containerd[1449]: time="2024-12-13T01:48:35.213619774Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:48:35.213813 containerd[1449]: time="2024-12-13T01:48:35.213690942Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:48:35.240069 systemd[1]: Started cri-containerd-845e11f6cab58dfc72328ffdbca08a5f2ababb3d7b27f4214e221b4d777197bd.scope - libcontainer container 845e11f6cab58dfc72328ffdbca08a5f2ababb3d7b27f4214e221b4d777197bd. 
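One reading aid for the WorkloadEndpoint dumps above: Go's %+v prints the port numbers as hex literals, so Port:0x35 is 53 (the coredns dns and dns-tcp ports) and Port:0x23c1 is 9153 (its Prometheus metrics port). A two-line check:

```go
// Decode the hex port literals from the WorkloadEndpoint dumps.
package main

import "fmt"

func main() {
	fmt.Println(0x35)   // 53:   coredns "dns" (UDP) and "dns-tcp" (TCP)
	fmt.Println(0x23c1) // 9153: coredns "metrics" (TCP)
}
```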
Dec 13 01:48:35.252985 systemd-resolved[1310]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 01:48:35.266185 systemd-networkd[1379]: calicca0b7f8fd1: Link UP Dec 13 01:48:35.266383 systemd-networkd[1379]: calicca0b7f8fd1: Gained carrier Dec 13 01:48:35.283612 containerd[1449]: 2024-12-13 01:48:35.092 [INFO][3888] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--568767dfbd--qv8xv-eth0 calico-apiserver-568767dfbd- calico-apiserver c2e2bdd9-b460-49cb-90c4-8e7578a0674d 803 0 2024-12-13 01:48:13 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:568767dfbd projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-568767dfbd-qv8xv eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calicca0b7f8fd1 [] []}} ContainerID="495c67e5285176da7c96c65d9230b6140943c508845c3d671c70d6d51b10f77a" Namespace="calico-apiserver" Pod="calico-apiserver-568767dfbd-qv8xv" WorkloadEndpoint="localhost-k8s-calico--apiserver--568767dfbd--qv8xv-" Dec 13 01:48:35.283612 containerd[1449]: 2024-12-13 01:48:35.092 [INFO][3888] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="495c67e5285176da7c96c65d9230b6140943c508845c3d671c70d6d51b10f77a" Namespace="calico-apiserver" Pod="calico-apiserver-568767dfbd-qv8xv" WorkloadEndpoint="localhost-k8s-calico--apiserver--568767dfbd--qv8xv-eth0" Dec 13 01:48:35.283612 containerd[1449]: 2024-12-13 01:48:35.121 [INFO][3907] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="495c67e5285176da7c96c65d9230b6140943c508845c3d671c70d6d51b10f77a" HandleID="k8s-pod-network.495c67e5285176da7c96c65d9230b6140943c508845c3d671c70d6d51b10f77a" Workload="localhost-k8s-calico--apiserver--568767dfbd--qv8xv-eth0" Dec 13 01:48:35.283612 containerd[1449]: 2024-12-13 01:48:35.133 [INFO][3907] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="495c67e5285176da7c96c65d9230b6140943c508845c3d671c70d6d51b10f77a" HandleID="k8s-pod-network.495c67e5285176da7c96c65d9230b6140943c508845c3d671c70d6d51b10f77a" Workload="localhost-k8s-calico--apiserver--568767dfbd--qv8xv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003ab030), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-568767dfbd-qv8xv", "timestamp":"2024-12-13 01:48:35.121566469 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 01:48:35.283612 containerd[1449]: 2024-12-13 01:48:35.133 [INFO][3907] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:48:35.283612 containerd[1449]: 2024-12-13 01:48:35.153 [INFO][3907] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
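The interface names systemd-networkd reports here (cali292a06fe556 earlier, calicca0b7f8fd1 now) are the host-side veth ends Calico creates for each workload, and they are not random: to my knowledge the name is a fixed prefix ("cali" by default) followed by the leading characters of a hash of the workload endpoint identity, truncated so the whole name fits the kernel's 15-byte IFNAMSIZ limit. The exact hash and input in the sketch below are assumptions for illustration, not Calico's confirmed algorithm:

```go
// Hypothetical reconstruction of the host-side veth naming shape seen
// above ("cali" + 11 hash characters, 15 bytes total). The choice of
// SHA-1 and of the endpoint-ID string are assumptions.
package main

import (
	"crypto/sha1"
	"fmt"
)

func vethName(prefix, endpointID string) string {
	sum := sha1.Sum([]byte(endpointID))
	hex := fmt.Sprintf("%x", sum)          // 40 hex chars
	return prefix + hex[:15-len(prefix)]   // e.g. "cali" + 11 chars
}

func main() {
	fmt.Println(vethName("cali", "kube-system/coredns-6f6b679f8f-ccdvt/eth0"))
}
```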
Dec 13 01:48:35.283612 containerd[1449]: 2024-12-13 01:48:35.153 [INFO][3907] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 13 01:48:35.283612 containerd[1449]: 2024-12-13 01:48:35.221 [INFO][3907] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.495c67e5285176da7c96c65d9230b6140943c508845c3d671c70d6d51b10f77a" host="localhost" Dec 13 01:48:35.283612 containerd[1449]: 2024-12-13 01:48:35.228 [INFO][3907] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Dec 13 01:48:35.283612 containerd[1449]: 2024-12-13 01:48:35.241 [INFO][3907] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Dec 13 01:48:35.283612 containerd[1449]: 2024-12-13 01:48:35.243 [INFO][3907] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 13 01:48:35.283612 containerd[1449]: 2024-12-13 01:48:35.245 [INFO][3907] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 13 01:48:35.283612 containerd[1449]: 2024-12-13 01:48:35.245 [INFO][3907] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.495c67e5285176da7c96c65d9230b6140943c508845c3d671c70d6d51b10f77a" host="localhost" Dec 13 01:48:35.283612 containerd[1449]: 2024-12-13 01:48:35.247 [INFO][3907] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.495c67e5285176da7c96c65d9230b6140943c508845c3d671c70d6d51b10f77a Dec 13 01:48:35.283612 containerd[1449]: 2024-12-13 01:48:35.252 [INFO][3907] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.495c67e5285176da7c96c65d9230b6140943c508845c3d671c70d6d51b10f77a" host="localhost" Dec 13 01:48:35.283612 containerd[1449]: 2024-12-13 01:48:35.260 [INFO][3907] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.495c67e5285176da7c96c65d9230b6140943c508845c3d671c70d6d51b10f77a" host="localhost" Dec 13 01:48:35.283612 containerd[1449]: 2024-12-13 01:48:35.260 [INFO][3907] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.495c67e5285176da7c96c65d9230b6140943c508845c3d671c70d6d51b10f77a" host="localhost" Dec 13 01:48:35.283612 containerd[1449]: 2024-12-13 01:48:35.261 [INFO][3907] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
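The dns.go:153 errors that recur through this log (first at 01:48:26 above) are not pod failures: the glibc resolver honors at most three nameservers, so when the host resolv.conf lists more, the kubelet applies the first three (here 1.1.1.1, 1.0.0.1, 8.8.8.8) and logs that the rest were omitted. Pods still get a working resolv.conf. A simplified sketch of that trim:

```go
// Sketch of the trim behind the recurring "Nameserver limits exceeded"
// lines: keep the first three nameserver entries from resolv.conf and
// report the applied set. Parsing is simplified; illustrative only.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

const maxNameservers = 3 // glibc resolver limit (MAXNS)

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if len(servers) > maxNameservers {
		fmt.Printf("Nameserver limits were exceeded, some nameservers have been omitted, "+
			"the applied nameserver line is: %s\n",
			strings.Join(servers[:maxNameservers], " "))
		servers = servers[:maxNameservers]
	}
	fmt.Println("applied:", servers)
}
```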
Dec 13 01:48:35.283612 containerd[1449]: 2024-12-13 01:48:35.261 [INFO][3907] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="495c67e5285176da7c96c65d9230b6140943c508845c3d671c70d6d51b10f77a" HandleID="k8s-pod-network.495c67e5285176da7c96c65d9230b6140943c508845c3d671c70d6d51b10f77a" Workload="localhost-k8s-calico--apiserver--568767dfbd--qv8xv-eth0" Dec 13 01:48:35.284087 containerd[1449]: 2024-12-13 01:48:35.263 [INFO][3888] cni-plugin/k8s.go 386: Populated endpoint ContainerID="495c67e5285176da7c96c65d9230b6140943c508845c3d671c70d6d51b10f77a" Namespace="calico-apiserver" Pod="calico-apiserver-568767dfbd-qv8xv" WorkloadEndpoint="localhost-k8s-calico--apiserver--568767dfbd--qv8xv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--568767dfbd--qv8xv-eth0", GenerateName:"calico-apiserver-568767dfbd-", Namespace:"calico-apiserver", SelfLink:"", UID:"c2e2bdd9-b460-49cb-90c4-8e7578a0674d", ResourceVersion:"803", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 48, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"568767dfbd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-568767dfbd-qv8xv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calicca0b7f8fd1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:48:35.284087 containerd[1449]: 2024-12-13 01:48:35.264 [INFO][3888] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="495c67e5285176da7c96c65d9230b6140943c508845c3d671c70d6d51b10f77a" Namespace="calico-apiserver" Pod="calico-apiserver-568767dfbd-qv8xv" WorkloadEndpoint="localhost-k8s-calico--apiserver--568767dfbd--qv8xv-eth0" Dec 13 01:48:35.284087 containerd[1449]: 2024-12-13 01:48:35.264 [INFO][3888] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicca0b7f8fd1 ContainerID="495c67e5285176da7c96c65d9230b6140943c508845c3d671c70d6d51b10f77a" Namespace="calico-apiserver" Pod="calico-apiserver-568767dfbd-qv8xv" WorkloadEndpoint="localhost-k8s-calico--apiserver--568767dfbd--qv8xv-eth0" Dec 13 01:48:35.284087 containerd[1449]: 2024-12-13 01:48:35.265 [INFO][3888] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="495c67e5285176da7c96c65d9230b6140943c508845c3d671c70d6d51b10f77a" Namespace="calico-apiserver" Pod="calico-apiserver-568767dfbd-qv8xv" WorkloadEndpoint="localhost-k8s-calico--apiserver--568767dfbd--qv8xv-eth0" Dec 13 01:48:35.284087 containerd[1449]: 2024-12-13 01:48:35.265 [INFO][3888] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="495c67e5285176da7c96c65d9230b6140943c508845c3d671c70d6d51b10f77a" Namespace="calico-apiserver" Pod="calico-apiserver-568767dfbd-qv8xv" WorkloadEndpoint="localhost-k8s-calico--apiserver--568767dfbd--qv8xv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--568767dfbd--qv8xv-eth0", GenerateName:"calico-apiserver-568767dfbd-", Namespace:"calico-apiserver", SelfLink:"", UID:"c2e2bdd9-b460-49cb-90c4-8e7578a0674d", ResourceVersion:"803", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 48, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"568767dfbd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"495c67e5285176da7c96c65d9230b6140943c508845c3d671c70d6d51b10f77a", Pod:"calico-apiserver-568767dfbd-qv8xv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calicca0b7f8fd1", MAC:"fe:2a:e9:07:9e:f9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:48:35.284087 containerd[1449]: 2024-12-13 01:48:35.281 [INFO][3888] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="495c67e5285176da7c96c65d9230b6140943c508845c3d671c70d6d51b10f77a" Namespace="calico-apiserver" Pod="calico-apiserver-568767dfbd-qv8xv" WorkloadEndpoint="localhost-k8s-calico--apiserver--568767dfbd--qv8xv-eth0" Dec 13 01:48:35.304223 containerd[1449]: time="2024-12-13T01:48:35.303827774Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-ccdvt,Uid:5ccb309b-ed47-4e0c-ae45-cf7bfdd924c2,Namespace:kube-system,Attempt:1,} returns sandbox id \"845e11f6cab58dfc72328ffdbca08a5f2ababb3d7b27f4214e221b4d777197bd\"" Dec 13 01:48:35.304725 containerd[1449]: time="2024-12-13T01:48:35.304033759Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:48:35.304725 containerd[1449]: time="2024-12-13T01:48:35.304111488Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:48:35.304725 containerd[1449]: time="2024-12-13T01:48:35.304136571Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:48:35.305712 kubelet[2458]: E1213 01:48:35.305461 2458 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:48:35.306407 containerd[1449]: time="2024-12-13T01:48:35.306102010Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:48:35.307647 containerd[1449]: time="2024-12-13T01:48:35.307534904Z" level=info msg="CreateContainer within sandbox \"845e11f6cab58dfc72328ffdbca08a5f2ababb3d7b27f4214e221b4d777197bd\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 01:48:35.322012 containerd[1449]: time="2024-12-13T01:48:35.321973979Z" level=info msg="CreateContainer within sandbox \"845e11f6cab58dfc72328ffdbca08a5f2ababb3d7b27f4214e221b4d777197bd\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"dfe6bc22307a2eec8530d21aa0d9f6062785a4a82c8fafb5a70fb037c597ca85\"" Dec 13 01:48:35.323245 containerd[1449]: time="2024-12-13T01:48:35.322408751Z" level=info msg="StartContainer for \"dfe6bc22307a2eec8530d21aa0d9f6062785a4a82c8fafb5a70fb037c597ca85\"" Dec 13 01:48:35.325821 systemd[1]: Started cri-containerd-495c67e5285176da7c96c65d9230b6140943c508845c3d671c70d6d51b10f77a.scope - libcontainer container 495c67e5285176da7c96c65d9230b6140943c508845c3d671c70d6d51b10f77a. Dec 13 01:48:35.337976 systemd-resolved[1310]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 01:48:35.359798 systemd[1]: Started cri-containerd-dfe6bc22307a2eec8530d21aa0d9f6062785a4a82c8fafb5a70fb037c597ca85.scope - libcontainer container dfe6bc22307a2eec8530d21aa0d9f6062785a4a82c8fafb5a70fb037c597ca85. Dec 13 01:48:35.368551 containerd[1449]: time="2024-12-13T01:48:35.368509833Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-568767dfbd-qv8xv,Uid:c2e2bdd9-b460-49cb-90c4-8e7578a0674d,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"495c67e5285176da7c96c65d9230b6140943c508845c3d671c70d6d51b10f77a\"" Dec 13 01:48:35.370281 containerd[1449]: time="2024-12-13T01:48:35.370249324Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Dec 13 01:48:35.390305 containerd[1449]: time="2024-12-13T01:48:35.390256435Z" level=info msg="StartContainer for \"dfe6bc22307a2eec8530d21aa0d9f6062785a4a82c8fafb5a70fb037c597ca85\" returns successfully" Dec 13 01:48:35.764681 containerd[1449]: time="2024-12-13T01:48:35.764639202Z" level=info msg="StopPodSandbox for \"6e05dacb397eb09bdb17cafb5d5150dcb1c0fa877ba386d38c38595f8ec6d236\"" Dec 13 01:48:35.849216 containerd[1449]: 2024-12-13 01:48:35.817 [INFO][4082] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="6e05dacb397eb09bdb17cafb5d5150dcb1c0fa877ba386d38c38595f8ec6d236" Dec 13 01:48:35.849216 containerd[1449]: 2024-12-13 01:48:35.817 [INFO][4082] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="6e05dacb397eb09bdb17cafb5d5150dcb1c0fa877ba386d38c38595f8ec6d236" iface="eth0" netns="/var/run/netns/cni-e94fd5d9-a6ad-bc19-c33f-baf1123413e5" Dec 13 01:48:35.849216 containerd[1449]: 2024-12-13 01:48:35.818 [INFO][4082] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="6e05dacb397eb09bdb17cafb5d5150dcb1c0fa877ba386d38c38595f8ec6d236" iface="eth0" netns="/var/run/netns/cni-e94fd5d9-a6ad-bc19-c33f-baf1123413e5" Dec 13 01:48:35.849216 containerd[1449]: 2024-12-13 01:48:35.818 [INFO][4082] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="6e05dacb397eb09bdb17cafb5d5150dcb1c0fa877ba386d38c38595f8ec6d236" iface="eth0" netns="/var/run/netns/cni-e94fd5d9-a6ad-bc19-c33f-baf1123413e5" Dec 13 01:48:35.849216 containerd[1449]: 2024-12-13 01:48:35.818 [INFO][4082] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="6e05dacb397eb09bdb17cafb5d5150dcb1c0fa877ba386d38c38595f8ec6d236" Dec 13 01:48:35.849216 containerd[1449]: 2024-12-13 01:48:35.818 [INFO][4082] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6e05dacb397eb09bdb17cafb5d5150dcb1c0fa877ba386d38c38595f8ec6d236" Dec 13 01:48:35.849216 containerd[1449]: 2024-12-13 01:48:35.836 [INFO][4090] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6e05dacb397eb09bdb17cafb5d5150dcb1c0fa877ba386d38c38595f8ec6d236" HandleID="k8s-pod-network.6e05dacb397eb09bdb17cafb5d5150dcb1c0fa877ba386d38c38595f8ec6d236" Workload="localhost-k8s-coredns--6f6b679f8f--hwhh6-eth0" Dec 13 01:48:35.849216 containerd[1449]: 2024-12-13 01:48:35.836 [INFO][4090] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:48:35.849216 containerd[1449]: 2024-12-13 01:48:35.836 [INFO][4090] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:48:35.849216 containerd[1449]: 2024-12-13 01:48:35.844 [WARNING][4090] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="6e05dacb397eb09bdb17cafb5d5150dcb1c0fa877ba386d38c38595f8ec6d236" HandleID="k8s-pod-network.6e05dacb397eb09bdb17cafb5d5150dcb1c0fa877ba386d38c38595f8ec6d236" Workload="localhost-k8s-coredns--6f6b679f8f--hwhh6-eth0" Dec 13 01:48:35.849216 containerd[1449]: 2024-12-13 01:48:35.844 [INFO][4090] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6e05dacb397eb09bdb17cafb5d5150dcb1c0fa877ba386d38c38595f8ec6d236" HandleID="k8s-pod-network.6e05dacb397eb09bdb17cafb5d5150dcb1c0fa877ba386d38c38595f8ec6d236" Workload="localhost-k8s-coredns--6f6b679f8f--hwhh6-eth0" Dec 13 01:48:35.849216 containerd[1449]: 2024-12-13 01:48:35.846 [INFO][4090] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:48:35.849216 containerd[1449]: 2024-12-13 01:48:35.847 [INFO][4082] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="6e05dacb397eb09bdb17cafb5d5150dcb1c0fa877ba386d38c38595f8ec6d236" Dec 13 01:48:35.850923 containerd[1449]: time="2024-12-13T01:48:35.849343933Z" level=info msg="TearDown network for sandbox \"6e05dacb397eb09bdb17cafb5d5150dcb1c0fa877ba386d38c38595f8ec6d236\" successfully" Dec 13 01:48:35.850923 containerd[1449]: time="2024-12-13T01:48:35.849371096Z" level=info msg="StopPodSandbox for \"6e05dacb397eb09bdb17cafb5d5150dcb1c0fa877ba386d38c38595f8ec6d236\" returns successfully" Dec 13 01:48:35.850923 containerd[1449]: time="2024-12-13T01:48:35.850109666Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-hwhh6,Uid:1e0c6640-314d-45ec-b91c-da2f72cfd50a,Namespace:kube-system,Attempt:1,}" Dec 13 01:48:35.851102 kubelet[2458]: E1213 01:48:35.849736 2458 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:48:35.955960 systemd-networkd[1379]: calica6042ecefd: Link UP Dec 13 01:48:35.956393 systemd-networkd[1379]: calica6042ecefd: Gained carrier Dec 13 01:48:35.968321 containerd[1449]: 2024-12-13 01:48:35.889 [INFO][4097] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--6f6b679f8f--hwhh6-eth0 coredns-6f6b679f8f- kube-system 1e0c6640-314d-45ec-b91c-da2f72cfd50a 821 0 2024-12-13 01:48:06 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-6f6b679f8f-hwhh6 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calica6042ecefd [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="e769e57ed6691c53b4a9dc8f993294cd7273311f7c4582e91f28b41db26e750e" Namespace="kube-system" Pod="coredns-6f6b679f8f-hwhh6" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--hwhh6-" Dec 13 01:48:35.968321 containerd[1449]: 2024-12-13 01:48:35.889 [INFO][4097] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="e769e57ed6691c53b4a9dc8f993294cd7273311f7c4582e91f28b41db26e750e" Namespace="kube-system" Pod="coredns-6f6b679f8f-hwhh6" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--hwhh6-eth0" Dec 13 01:48:35.968321 containerd[1449]: 2024-12-13 01:48:35.915 [INFO][4111] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e769e57ed6691c53b4a9dc8f993294cd7273311f7c4582e91f28b41db26e750e" HandleID="k8s-pod-network.e769e57ed6691c53b4a9dc8f993294cd7273311f7c4582e91f28b41db26e750e" Workload="localhost-k8s-coredns--6f6b679f8f--hwhh6-eth0" Dec 13 01:48:35.968321 containerd[1449]: 2024-12-13 01:48:35.925 [INFO][4111] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e769e57ed6691c53b4a9dc8f993294cd7273311f7c4582e91f28b41db26e750e" HandleID="k8s-pod-network.e769e57ed6691c53b4a9dc8f993294cd7273311f7c4582e91f28b41db26e750e" Workload="localhost-k8s-coredns--6f6b679f8f--hwhh6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002f38b0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-6f6b679f8f-hwhh6", "timestamp":"2024-12-13 01:48:35.915305627 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 
01:48:35.968321 containerd[1449]: 2024-12-13 01:48:35.925 [INFO][4111] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:48:35.968321 containerd[1449]: 2024-12-13 01:48:35.925 [INFO][4111] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:48:35.968321 containerd[1449]: 2024-12-13 01:48:35.925 [INFO][4111] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 13 01:48:35.968321 containerd[1449]: 2024-12-13 01:48:35.927 [INFO][4111] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.e769e57ed6691c53b4a9dc8f993294cd7273311f7c4582e91f28b41db26e750e" host="localhost" Dec 13 01:48:35.968321 containerd[1449]: 2024-12-13 01:48:35.931 [INFO][4111] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Dec 13 01:48:35.968321 containerd[1449]: 2024-12-13 01:48:35.935 [INFO][4111] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Dec 13 01:48:35.968321 containerd[1449]: 2024-12-13 01:48:35.937 [INFO][4111] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 13 01:48:35.968321 containerd[1449]: 2024-12-13 01:48:35.939 [INFO][4111] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 13 01:48:35.968321 containerd[1449]: 2024-12-13 01:48:35.939 [INFO][4111] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e769e57ed6691c53b4a9dc8f993294cd7273311f7c4582e91f28b41db26e750e" host="localhost" Dec 13 01:48:35.968321 containerd[1449]: 2024-12-13 01:48:35.942 [INFO][4111] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.e769e57ed6691c53b4a9dc8f993294cd7273311f7c4582e91f28b41db26e750e Dec 13 01:48:35.968321 containerd[1449]: 2024-12-13 01:48:35.946 [INFO][4111] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e769e57ed6691c53b4a9dc8f993294cd7273311f7c4582e91f28b41db26e750e" host="localhost" Dec 13 01:48:35.968321 containerd[1449]: 2024-12-13 01:48:35.950 [INFO][4111] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.e769e57ed6691c53b4a9dc8f993294cd7273311f7c4582e91f28b41db26e750e" host="localhost" Dec 13 01:48:35.968321 containerd[1449]: 2024-12-13 01:48:35.950 [INFO][4111] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.e769e57ed6691c53b4a9dc8f993294cd7273311f7c4582e91f28b41db26e750e" host="localhost" Dec 13 01:48:35.968321 containerd[1449]: 2024-12-13 01:48:35.950 [INFO][4111] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
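The IPAM walk traced above is the same fixed sequence every pod in this log goes through: take the host-wide lock, confirm the host's affinity for the 192.168.88.128/26 block, claim the next free address, and write the block back before releasing the lock. A toy Go sketch of the claim step follows; the Block type and its bitmap are illustrative assumptions, not Calico's real data structures.

    package main

    import (
        "fmt"
        "net"
    )

    // Block is a toy model of a /26 IPAM block: 64 addresses, one bitmap slot each.
    // Illustrative only; not Calico's actual allocation structure.
    type Block struct {
        CIDR      net.IPNet
        Allocated [64]bool
    }

    // Claim marks the first free slot used and returns its address, the moral
    // equivalent of the "Attempting to assign 1 addresses from block" step above.
    func (b *Block) Claim() (net.IP, error) {
        base := b.CIDR.IP.To4()
        for i, used := range b.Allocated {
            if used {
                continue
            }
            b.Allocated[i] = true
            return net.IPv4(base[0], base[1], base[2], base[3]+byte(i)), nil
        }
        return nil, fmt.Errorf("block %s exhausted", b.CIDR.String())
    }

    func main() {
        _, cidr, _ := net.ParseCIDR("192.168.88.128/26")
        b := &Block{CIDR: *cidr}
        // Slots .128 through .130 are already in use at this point in the
        // trace, so the next claim lands on 192.168.88.131, as in the log.
        for i := 0; i < 3; i++ {
            b.Allocated[i] = true
        }
        ip, _ := b.Claim()
        fmt.Println(ip) // 192.168.88.131
    }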
Dec 13 01:48:35.968321 containerd[1449]: 2024-12-13 01:48:35.950 [INFO][4111] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="e769e57ed6691c53b4a9dc8f993294cd7273311f7c4582e91f28b41db26e750e" HandleID="k8s-pod-network.e769e57ed6691c53b4a9dc8f993294cd7273311f7c4582e91f28b41db26e750e" Workload="localhost-k8s-coredns--6f6b679f8f--hwhh6-eth0" Dec 13 01:48:35.970272 containerd[1449]: 2024-12-13 01:48:35.953 [INFO][4097] cni-plugin/k8s.go 386: Populated endpoint ContainerID="e769e57ed6691c53b4a9dc8f993294cd7273311f7c4582e91f28b41db26e750e" Namespace="kube-system" Pod="coredns-6f6b679f8f-hwhh6" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--hwhh6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--hwhh6-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"1e0c6640-314d-45ec-b91c-da2f72cfd50a", ResourceVersion:"821", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 48, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-6f6b679f8f-hwhh6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calica6042ecefd", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:48:35.970272 containerd[1449]: 2024-12-13 01:48:35.953 [INFO][4097] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="e769e57ed6691c53b4a9dc8f993294cd7273311f7c4582e91f28b41db26e750e" Namespace="kube-system" Pod="coredns-6f6b679f8f-hwhh6" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--hwhh6-eth0" Dec 13 01:48:35.970272 containerd[1449]: 2024-12-13 01:48:35.953 [INFO][4097] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calica6042ecefd ContainerID="e769e57ed6691c53b4a9dc8f993294cd7273311f7c4582e91f28b41db26e750e" Namespace="kube-system" Pod="coredns-6f6b679f8f-hwhh6" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--hwhh6-eth0" Dec 13 01:48:35.970272 containerd[1449]: 2024-12-13 01:48:35.956 [INFO][4097] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e769e57ed6691c53b4a9dc8f993294cd7273311f7c4582e91f28b41db26e750e" Namespace="kube-system" Pod="coredns-6f6b679f8f-hwhh6" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--hwhh6-eth0" Dec 13 01:48:35.970272 containerd[1449]: 2024-12-13 01:48:35.956 
[INFO][4097] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="e769e57ed6691c53b4a9dc8f993294cd7273311f7c4582e91f28b41db26e750e" Namespace="kube-system" Pod="coredns-6f6b679f8f-hwhh6" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--hwhh6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--hwhh6-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"1e0c6640-314d-45ec-b91c-da2f72cfd50a", ResourceVersion:"821", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 48, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e769e57ed6691c53b4a9dc8f993294cd7273311f7c4582e91f28b41db26e750e", Pod:"coredns-6f6b679f8f-hwhh6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calica6042ecefd", MAC:"d2:77:1d:46:d1:e2", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:48:35.970272 containerd[1449]: 2024-12-13 01:48:35.965 [INFO][4097] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="e769e57ed6691c53b4a9dc8f993294cd7273311f7c4582e91f28b41db26e750e" Namespace="kube-system" Pod="coredns-6f6b679f8f-hwhh6" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--hwhh6-eth0" Dec 13 01:48:35.970656 kubelet[2458]: E1213 01:48:35.968823 2458 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:48:35.981655 kubelet[2458]: I1213 01:48:35.981604 2458 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-ccdvt" podStartSLOduration=29.981578639 podStartE2EDuration="29.981578639s" podCreationTimestamp="2024-12-13 01:48:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:48:35.980856112 +0000 UTC m=+36.297502167" watchObservedRunningTime="2024-12-13 01:48:35.981578639 +0000 UTC m=+36.298224694" Dec 13 01:48:36.006631 containerd[1449]: time="2024-12-13T01:48:36.005803289Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:48:36.006631 containerd[1449]: time="2024-12-13T01:48:36.005863576Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:48:36.006631 containerd[1449]: time="2024-12-13T01:48:36.005878258Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:48:36.006631 containerd[1449]: time="2024-12-13T01:48:36.005955147Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:48:36.019085 systemd[1]: run-netns-cni\x2de94fd5d9\x2da6ad\x2dbc19\x2dc33f\x2dbaf1123413e5.mount: Deactivated successfully. Dec 13 01:48:36.040782 systemd[1]: Started cri-containerd-e769e57ed6691c53b4a9dc8f993294cd7273311f7c4582e91f28b41db26e750e.scope - libcontainer container e769e57ed6691c53b4a9dc8f993294cd7273311f7c4582e91f28b41db26e750e. Dec 13 01:48:36.052297 systemd-resolved[1310]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 01:48:36.069437 containerd[1449]: time="2024-12-13T01:48:36.069373128Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-hwhh6,Uid:1e0c6640-314d-45ec-b91c-da2f72cfd50a,Namespace:kube-system,Attempt:1,} returns sandbox id \"e769e57ed6691c53b4a9dc8f993294cd7273311f7c4582e91f28b41db26e750e\"" Dec 13 01:48:36.070072 kubelet[2458]: E1213 01:48:36.070049 2458 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:48:36.071842 containerd[1449]: time="2024-12-13T01:48:36.071803856Z" level=info msg="CreateContainer within sandbox \"e769e57ed6691c53b4a9dc8f993294cd7273311f7c4582e91f28b41db26e750e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 01:48:36.084932 containerd[1449]: time="2024-12-13T01:48:36.082743710Z" level=info msg="CreateContainer within sandbox \"e769e57ed6691c53b4a9dc8f993294cd7273311f7c4582e91f28b41db26e750e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"85934df1e4e15a7d1266455b998518965c1aadc83bcdb6fafc390de4c523dc49\"" Dec 13 01:48:36.085450 containerd[1449]: time="2024-12-13T01:48:36.085422427Z" level=info msg="StartContainer for \"85934df1e4e15a7d1266455b998518965c1aadc83bcdb6fafc390de4c523dc49\"" Dec 13 01:48:36.113738 systemd[1]: Started cri-containerd-85934df1e4e15a7d1266455b998518965c1aadc83bcdb6fafc390de4c523dc49.scope - libcontainer container 85934df1e4e15a7d1266455b998518965c1aadc83bcdb6fafc390de4c523dc49. 
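The recurring kubelet "Nameserver limits exceeded" errors throughout this log are expected noise rather than a fault: the glibc resolver honours at most three nameserver entries (MAXNS = 3), so kubelet truncates the pod's resolv.conf to the first three and reports the rest as omitted. A minimal sketch of that truncation; the function name is assumed, and the fourth server below is a hypothetical stand-in for whatever entry was dropped on this node.

    package main

    import "fmt"

    // maxNameservers mirrors the glibc resolver limit (MAXNS = 3) that kubelet
    // enforces when building a pod's resolv.conf.
    const maxNameservers = 3

    // capNameservers keeps the first limit entries and reports whether any were
    // dropped, matching the "some nameservers have been omitted" message.
    func capNameservers(ns []string, limit int) (applied []string, omitted bool) {
        if len(ns) <= limit {
            return ns, false
        }
        return ns[:limit], true
    }

    func main() {
        // 9.9.9.9 is hypothetical; the log only shows the three applied servers.
        host := []string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "9.9.9.9"}
        applied, omitted := capNameservers(host, maxNameservers)
        fmt.Println(applied, omitted) // [1.1.1.1 1.0.0.1 8.8.8.8] true
    }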
Dec 13 01:48:36.133241 containerd[1449]: time="2024-12-13T01:48:36.133187036Z" level=info msg="StartContainer for \"85934df1e4e15a7d1266455b998518965c1aadc83bcdb6fafc390de4c523dc49\" returns successfully" Dec 13 01:48:36.512719 systemd-networkd[1379]: cali292a06fe556: Gained IPv6LL Dec 13 01:48:36.961172 systemd-networkd[1379]: calicca0b7f8fd1: Gained IPv6LL Dec 13 01:48:36.978523 kubelet[2458]: E1213 01:48:36.978484 2458 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:48:36.978824 kubelet[2458]: E1213 01:48:36.978754 2458 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:48:36.994777 kubelet[2458]: I1213 01:48:36.994723 2458 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-hwhh6" podStartSLOduration=30.994705819 podStartE2EDuration="30.994705819s" podCreationTimestamp="2024-12-13 01:48:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:48:36.992655777 +0000 UTC m=+37.309301832" watchObservedRunningTime="2024-12-13 01:48:36.994705819 +0000 UTC m=+37.311351874" Dec 13 01:48:37.141981 containerd[1449]: time="2024-12-13T01:48:37.141935648Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:48:37.143668 containerd[1449]: time="2024-12-13T01:48:37.142674813Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=39298409" Dec 13 01:48:37.150892 containerd[1449]: time="2024-12-13T01:48:37.150850155Z" level=info msg="ImageCreate event name:\"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:48:37.153870 containerd[1449]: time="2024-12-13T01:48:37.153832259Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:48:37.161214 containerd[1449]: time="2024-12-13T01:48:37.161004406Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"40668079\" in 1.790716718s" Dec 13 01:48:37.161214 containerd[1449]: time="2024-12-13T01:48:37.161038090Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\"" Dec 13 01:48:37.163527 containerd[1449]: time="2024-12-13T01:48:37.163501054Z" level=info msg="CreateContainer within sandbox \"495c67e5285176da7c96c65d9230b6140943c508845c3d671c70d6d51b10f77a\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Dec 13 01:48:37.173267 containerd[1449]: time="2024-12-13T01:48:37.173157447Z" level=info msg="CreateContainer within sandbox \"495c67e5285176da7c96c65d9230b6140943c508845c3d671c70d6d51b10f77a\" for 
&ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"a14d363880618eee6e6f23ee799bfcf26c7eb63f0a818a0d260e5d9b02c482fd\"" Dec 13 01:48:37.173747 containerd[1449]: time="2024-12-13T01:48:37.173721672Z" level=info msg="StartContainer for \"a14d363880618eee6e6f23ee799bfcf26c7eb63f0a818a0d260e5d9b02c482fd\"" Dec 13 01:48:37.222795 systemd[1]: Started cri-containerd-a14d363880618eee6e6f23ee799bfcf26c7eb63f0a818a0d260e5d9b02c482fd.scope - libcontainer container a14d363880618eee6e6f23ee799bfcf26c7eb63f0a818a0d260e5d9b02c482fd. Dec 13 01:48:37.283233 systemd-networkd[1379]: calica6042ecefd: Gained IPv6LL Dec 13 01:48:37.306022 containerd[1449]: time="2024-12-13T01:48:37.305857063Z" level=info msg="StartContainer for \"a14d363880618eee6e6f23ee799bfcf26c7eb63f0a818a0d260e5d9b02c482fd\" returns successfully" Dec 13 01:48:37.765101 containerd[1449]: time="2024-12-13T01:48:37.764742878Z" level=info msg="StopPodSandbox for \"00abbd5d43ffcc0c24905f5b2367a2c892ac5cbc2d2cc5c32bba7f8ee2a5587c\"" Dec 13 01:48:37.765506 containerd[1449]: time="2024-12-13T01:48:37.765468601Z" level=info msg="StopPodSandbox for \"e9e6f61c4c1ed3323381b693883ae8a6d5fbbc71f5b753b0da1bc35d5d9a0e84\"" Dec 13 01:48:37.771722 containerd[1449]: time="2024-12-13T01:48:37.771680437Z" level=info msg="StopPodSandbox for \"9b97d4a163e2480f0a12e827e359cb721e036f51d99ffbb29491ec33ba750e08\"" Dec 13 01:48:37.895698 containerd[1449]: 2024-12-13 01:48:37.830 [INFO][4322] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="e9e6f61c4c1ed3323381b693883ae8a6d5fbbc71f5b753b0da1bc35d5d9a0e84" Dec 13 01:48:37.895698 containerd[1449]: 2024-12-13 01:48:37.830 [INFO][4322] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="e9e6f61c4c1ed3323381b693883ae8a6d5fbbc71f5b753b0da1bc35d5d9a0e84" iface="eth0" netns="/var/run/netns/cni-f88ec5c3-b04b-f551-0a8e-17ae25fb593d" Dec 13 01:48:37.895698 containerd[1449]: 2024-12-13 01:48:37.831 [INFO][4322] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e9e6f61c4c1ed3323381b693883ae8a6d5fbbc71f5b753b0da1bc35d5d9a0e84" iface="eth0" netns="/var/run/netns/cni-f88ec5c3-b04b-f551-0a8e-17ae25fb593d" Dec 13 01:48:37.895698 containerd[1449]: 2024-12-13 01:48:37.831 [INFO][4322] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="e9e6f61c4c1ed3323381b693883ae8a6d5fbbc71f5b753b0da1bc35d5d9a0e84" iface="eth0" netns="/var/run/netns/cni-f88ec5c3-b04b-f551-0a8e-17ae25fb593d" Dec 13 01:48:37.895698 containerd[1449]: 2024-12-13 01:48:37.831 [INFO][4322] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e9e6f61c4c1ed3323381b693883ae8a6d5fbbc71f5b753b0da1bc35d5d9a0e84" Dec 13 01:48:37.895698 containerd[1449]: 2024-12-13 01:48:37.831 [INFO][4322] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e9e6f61c4c1ed3323381b693883ae8a6d5fbbc71f5b753b0da1bc35d5d9a0e84" Dec 13 01:48:37.895698 containerd[1449]: 2024-12-13 01:48:37.872 [INFO][4344] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e9e6f61c4c1ed3323381b693883ae8a6d5fbbc71f5b753b0da1bc35d5d9a0e84" HandleID="k8s-pod-network.e9e6f61c4c1ed3323381b693883ae8a6d5fbbc71f5b753b0da1bc35d5d9a0e84" Workload="localhost-k8s-calico--kube--controllers--5d4845d985--cnl5g-eth0" Dec 13 01:48:37.895698 containerd[1449]: 2024-12-13 01:48:37.872 [INFO][4344] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Dec 13 01:48:37.895698 containerd[1449]: 2024-12-13 01:48:37.872 [INFO][4344] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:48:37.895698 containerd[1449]: 2024-12-13 01:48:37.888 [WARNING][4344] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="e9e6f61c4c1ed3323381b693883ae8a6d5fbbc71f5b753b0da1bc35d5d9a0e84" HandleID="k8s-pod-network.e9e6f61c4c1ed3323381b693883ae8a6d5fbbc71f5b753b0da1bc35d5d9a0e84" Workload="localhost-k8s-calico--kube--controllers--5d4845d985--cnl5g-eth0" Dec 13 01:48:37.895698 containerd[1449]: 2024-12-13 01:48:37.889 [INFO][4344] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e9e6f61c4c1ed3323381b693883ae8a6d5fbbc71f5b753b0da1bc35d5d9a0e84" HandleID="k8s-pod-network.e9e6f61c4c1ed3323381b693883ae8a6d5fbbc71f5b753b0da1bc35d5d9a0e84" Workload="localhost-k8s-calico--kube--controllers--5d4845d985--cnl5g-eth0" Dec 13 01:48:37.895698 containerd[1449]: 2024-12-13 01:48:37.890 [INFO][4344] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:48:37.895698 containerd[1449]: 2024-12-13 01:48:37.892 [INFO][4322] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="e9e6f61c4c1ed3323381b693883ae8a6d5fbbc71f5b753b0da1bc35d5d9a0e84" Dec 13 01:48:37.899538 containerd[1449]: time="2024-12-13T01:48:37.899497451Z" level=info msg="TearDown network for sandbox \"e9e6f61c4c1ed3323381b693883ae8a6d5fbbc71f5b753b0da1bc35d5d9a0e84\" successfully" Dec 13 01:48:37.899866 containerd[1449]: time="2024-12-13T01:48:37.899720076Z" level=info msg="StopPodSandbox for \"e9e6f61c4c1ed3323381b693883ae8a6d5fbbc71f5b753b0da1bc35d5d9a0e84\" returns successfully" Dec 13 01:48:37.901561 containerd[1449]: time="2024-12-13T01:48:37.901529805Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5d4845d985-cnl5g,Uid:15fcbc53-768f-4fca-83b6-c146a5c50cc1,Namespace:calico-system,Attempt:1,}" Dec 13 01:48:37.909961 containerd[1449]: 2024-12-13 01:48:37.842 [INFO][4329] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="9b97d4a163e2480f0a12e827e359cb721e036f51d99ffbb29491ec33ba750e08" Dec 13 01:48:37.909961 containerd[1449]: 2024-12-13 01:48:37.842 [INFO][4329] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="9b97d4a163e2480f0a12e827e359cb721e036f51d99ffbb29491ec33ba750e08" iface="eth0" netns="/var/run/netns/cni-c725112a-6d09-26fd-4f26-e9e9d76b393c" Dec 13 01:48:37.909961 containerd[1449]: 2024-12-13 01:48:37.842 [INFO][4329] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="9b97d4a163e2480f0a12e827e359cb721e036f51d99ffbb29491ec33ba750e08" iface="eth0" netns="/var/run/netns/cni-c725112a-6d09-26fd-4f26-e9e9d76b393c" Dec 13 01:48:37.909961 containerd[1449]: 2024-12-13 01:48:37.842 [INFO][4329] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="9b97d4a163e2480f0a12e827e359cb721e036f51d99ffbb29491ec33ba750e08" iface="eth0" netns="/var/run/netns/cni-c725112a-6d09-26fd-4f26-e9e9d76b393c" Dec 13 01:48:37.909961 containerd[1449]: 2024-12-13 01:48:37.843 [INFO][4329] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9b97d4a163e2480f0a12e827e359cb721e036f51d99ffbb29491ec33ba750e08" Dec 13 01:48:37.909961 containerd[1449]: 2024-12-13 01:48:37.843 [INFO][4329] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9b97d4a163e2480f0a12e827e359cb721e036f51d99ffbb29491ec33ba750e08" Dec 13 01:48:37.909961 containerd[1449]: 2024-12-13 01:48:37.877 [INFO][4354] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9b97d4a163e2480f0a12e827e359cb721e036f51d99ffbb29491ec33ba750e08" HandleID="k8s-pod-network.9b97d4a163e2480f0a12e827e359cb721e036f51d99ffbb29491ec33ba750e08" Workload="localhost-k8s-csi--node--driver--tv6tv-eth0" Dec 13 01:48:37.909961 containerd[1449]: 2024-12-13 01:48:37.877 [INFO][4354] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:48:37.909961 containerd[1449]: 2024-12-13 01:48:37.890 [INFO][4354] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:48:37.909961 containerd[1449]: 2024-12-13 01:48:37.902 [WARNING][4354] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="9b97d4a163e2480f0a12e827e359cb721e036f51d99ffbb29491ec33ba750e08" HandleID="k8s-pod-network.9b97d4a163e2480f0a12e827e359cb721e036f51d99ffbb29491ec33ba750e08" Workload="localhost-k8s-csi--node--driver--tv6tv-eth0" Dec 13 01:48:37.909961 containerd[1449]: 2024-12-13 01:48:37.902 [INFO][4354] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9b97d4a163e2480f0a12e827e359cb721e036f51d99ffbb29491ec33ba750e08" HandleID="k8s-pod-network.9b97d4a163e2480f0a12e827e359cb721e036f51d99ffbb29491ec33ba750e08" Workload="localhost-k8s-csi--node--driver--tv6tv-eth0" Dec 13 01:48:37.909961 containerd[1449]: 2024-12-13 01:48:37.903 [INFO][4354] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:48:37.909961 containerd[1449]: 2024-12-13 01:48:37.906 [INFO][4329] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="9b97d4a163e2480f0a12e827e359cb721e036f51d99ffbb29491ec33ba750e08" Dec 13 01:48:37.909961 containerd[1449]: time="2024-12-13T01:48:37.909310582Z" level=info msg="TearDown network for sandbox \"9b97d4a163e2480f0a12e827e359cb721e036f51d99ffbb29491ec33ba750e08\" successfully" Dec 13 01:48:37.909961 containerd[1449]: time="2024-12-13T01:48:37.909331304Z" level=info msg="StopPodSandbox for \"9b97d4a163e2480f0a12e827e359cb721e036f51d99ffbb29491ec33ba750e08\" returns successfully" Dec 13 01:48:37.909961 containerd[1449]: time="2024-12-13T01:48:37.909849724Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tv6tv,Uid:1079435b-e60b-443f-932d-e02d21e8e429,Namespace:calico-system,Attempt:1,}" Dec 13 01:48:37.920610 containerd[1449]: 2024-12-13 01:48:37.839 [INFO][4320] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="00abbd5d43ffcc0c24905f5b2367a2c892ac5cbc2d2cc5c32bba7f8ee2a5587c" Dec 13 01:48:37.920610 containerd[1449]: 2024-12-13 01:48:37.839 [INFO][4320] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="00abbd5d43ffcc0c24905f5b2367a2c892ac5cbc2d2cc5c32bba7f8ee2a5587c" iface="eth0" netns="/var/run/netns/cni-736f4628-2adf-9058-b12e-1c97fd5644dd" Dec 13 01:48:37.920610 containerd[1449]: 2024-12-13 01:48:37.839 [INFO][4320] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="00abbd5d43ffcc0c24905f5b2367a2c892ac5cbc2d2cc5c32bba7f8ee2a5587c" iface="eth0" netns="/var/run/netns/cni-736f4628-2adf-9058-b12e-1c97fd5644dd" Dec 13 01:48:37.920610 containerd[1449]: 2024-12-13 01:48:37.839 [INFO][4320] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="00abbd5d43ffcc0c24905f5b2367a2c892ac5cbc2d2cc5c32bba7f8ee2a5587c" iface="eth0" netns="/var/run/netns/cni-736f4628-2adf-9058-b12e-1c97fd5644dd" Dec 13 01:48:37.920610 containerd[1449]: 2024-12-13 01:48:37.839 [INFO][4320] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="00abbd5d43ffcc0c24905f5b2367a2c892ac5cbc2d2cc5c32bba7f8ee2a5587c" Dec 13 01:48:37.920610 containerd[1449]: 2024-12-13 01:48:37.839 [INFO][4320] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="00abbd5d43ffcc0c24905f5b2367a2c892ac5cbc2d2cc5c32bba7f8ee2a5587c" Dec 13 01:48:37.920610 containerd[1449]: 2024-12-13 01:48:37.881 [INFO][4349] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="00abbd5d43ffcc0c24905f5b2367a2c892ac5cbc2d2cc5c32bba7f8ee2a5587c" HandleID="k8s-pod-network.00abbd5d43ffcc0c24905f5b2367a2c892ac5cbc2d2cc5c32bba7f8ee2a5587c" Workload="localhost-k8s-calico--apiserver--568767dfbd--c99fj-eth0" Dec 13 01:48:37.920610 containerd[1449]: 2024-12-13 01:48:37.881 [INFO][4349] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:48:37.920610 containerd[1449]: 2024-12-13 01:48:37.903 [INFO][4349] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:48:37.920610 containerd[1449]: 2024-12-13 01:48:37.914 [WARNING][4349] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="00abbd5d43ffcc0c24905f5b2367a2c892ac5cbc2d2cc5c32bba7f8ee2a5587c" HandleID="k8s-pod-network.00abbd5d43ffcc0c24905f5b2367a2c892ac5cbc2d2cc5c32bba7f8ee2a5587c" Workload="localhost-k8s-calico--apiserver--568767dfbd--c99fj-eth0" Dec 13 01:48:37.920610 containerd[1449]: 2024-12-13 01:48:37.914 [INFO][4349] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="00abbd5d43ffcc0c24905f5b2367a2c892ac5cbc2d2cc5c32bba7f8ee2a5587c" HandleID="k8s-pod-network.00abbd5d43ffcc0c24905f5b2367a2c892ac5cbc2d2cc5c32bba7f8ee2a5587c" Workload="localhost-k8s-calico--apiserver--568767dfbd--c99fj-eth0" Dec 13 01:48:37.920610 containerd[1449]: 2024-12-13 01:48:37.916 [INFO][4349] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:48:37.920610 containerd[1449]: 2024-12-13 01:48:37.918 [INFO][4320] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="00abbd5d43ffcc0c24905f5b2367a2c892ac5cbc2d2cc5c32bba7f8ee2a5587c" Dec 13 01:48:37.922071 containerd[1449]: time="2024-12-13T01:48:37.921976962Z" level=info msg="TearDown network for sandbox \"00abbd5d43ffcc0c24905f5b2367a2c892ac5cbc2d2cc5c32bba7f8ee2a5587c\" successfully" Dec 13 01:48:37.922071 containerd[1449]: time="2024-12-13T01:48:37.922067652Z" level=info msg="StopPodSandbox for \"00abbd5d43ffcc0c24905f5b2367a2c892ac5cbc2d2cc5c32bba7f8ee2a5587c\" returns successfully" Dec 13 01:48:37.924118 containerd[1449]: time="2024-12-13T01:48:37.924076764Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-568767dfbd-c99fj,Uid:1e6eb865-39dc-4ee5-9cdc-f699e1d6b8f5,Namespace:calico-apiserver,Attempt:1,}" Dec 13 01:48:37.986038 kubelet[2458]: E1213 01:48:37.986003 2458 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:48:37.987850 kubelet[2458]: E1213 01:48:37.986339 2458 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:48:38.024305 systemd[1]: run-netns-cni\x2dc725112a\x2d6d09\x2d26fd\x2d4f26\x2de9e9d76b393c.mount: Deactivated successfully. Dec 13 01:48:38.024400 systemd[1]: run-netns-cni\x2d736f4628\x2d2adf\x2d9058\x2db12e\x2d1c97fd5644dd.mount: Deactivated successfully. Dec 13 01:48:38.024449 systemd[1]: run-netns-cni\x2df88ec5c3\x2db04b\x2df551\x2d0a8e\x2d17ae25fb593d.mount: Deactivated successfully. Dec 13 01:48:38.036521 kubelet[2458]: I1213 01:48:38.036164 2458 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-568767dfbd-qv8xv" podStartSLOduration=23.244035181 podStartE2EDuration="25.036145821s" podCreationTimestamp="2024-12-13 01:48:13 +0000 UTC" firstStartedPulling="2024-12-13 01:48:35.369929885 +0000 UTC m=+35.686575940" lastFinishedPulling="2024-12-13 01:48:37.162040525 +0000 UTC m=+37.478686580" observedRunningTime="2024-12-13 01:48:38.002756467 +0000 UTC m=+38.319402522" watchObservedRunningTime="2024-12-13 01:48:38.036145821 +0000 UTC m=+38.352791876" Dec 13 01:48:38.137267 systemd-networkd[1379]: calic68c9ceec85: Link UP Dec 13 01:48:38.138958 systemd-networkd[1379]: calic68c9ceec85: Gained carrier Dec 13 01:48:38.186615 containerd[1449]: 2024-12-13 01:48:38.009 [INFO][4382] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--tv6tv-eth0 csi-node-driver- calico-system 1079435b-e60b-443f-932d-e02d21e8e429 855 0 2024-12-13 01:48:13 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:56747c9949 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-tv6tv eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calic68c9ceec85 [] []}} ContainerID="e33cc7cf3607af5b291c340baf71a7b270b655b4af4725b0a0e9a4e419bbc0d4" Namespace="calico-system" Pod="csi-node-driver-tv6tv" WorkloadEndpoint="localhost-k8s-csi--node--driver--tv6tv-" Dec 13 01:48:38.186615 containerd[1449]: 2024-12-13 01:48:38.009 [INFO][4382] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s 
ContainerID="e33cc7cf3607af5b291c340baf71a7b270b655b4af4725b0a0e9a4e419bbc0d4" Namespace="calico-system" Pod="csi-node-driver-tv6tv" WorkloadEndpoint="localhost-k8s-csi--node--driver--tv6tv-eth0" Dec 13 01:48:38.186615 containerd[1449]: 2024-12-13 01:48:38.056 [INFO][4418] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e33cc7cf3607af5b291c340baf71a7b270b655b4af4725b0a0e9a4e419bbc0d4" HandleID="k8s-pod-network.e33cc7cf3607af5b291c340baf71a7b270b655b4af4725b0a0e9a4e419bbc0d4" Workload="localhost-k8s-csi--node--driver--tv6tv-eth0" Dec 13 01:48:38.186615 containerd[1449]: 2024-12-13 01:48:38.070 [INFO][4418] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e33cc7cf3607af5b291c340baf71a7b270b655b4af4725b0a0e9a4e419bbc0d4" HandleID="k8s-pod-network.e33cc7cf3607af5b291c340baf71a7b270b655b4af4725b0a0e9a4e419bbc0d4" Workload="localhost-k8s-csi--node--driver--tv6tv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004d7c0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-tv6tv", "timestamp":"2024-12-13 01:48:38.056239041 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 01:48:38.186615 containerd[1449]: 2024-12-13 01:48:38.070 [INFO][4418] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:48:38.186615 containerd[1449]: 2024-12-13 01:48:38.070 [INFO][4418] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:48:38.186615 containerd[1449]: 2024-12-13 01:48:38.070 [INFO][4418] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 13 01:48:38.186615 containerd[1449]: 2024-12-13 01:48:38.077 [INFO][4418] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.e33cc7cf3607af5b291c340baf71a7b270b655b4af4725b0a0e9a4e419bbc0d4" host="localhost" Dec 13 01:48:38.186615 containerd[1449]: 2024-12-13 01:48:38.086 [INFO][4418] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Dec 13 01:48:38.186615 containerd[1449]: 2024-12-13 01:48:38.096 [INFO][4418] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Dec 13 01:48:38.186615 containerd[1449]: 2024-12-13 01:48:38.097 [INFO][4418] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 13 01:48:38.186615 containerd[1449]: 2024-12-13 01:48:38.100 [INFO][4418] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 13 01:48:38.186615 containerd[1449]: 2024-12-13 01:48:38.100 [INFO][4418] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e33cc7cf3607af5b291c340baf71a7b270b655b4af4725b0a0e9a4e419bbc0d4" host="localhost" Dec 13 01:48:38.186615 containerd[1449]: 2024-12-13 01:48:38.101 [INFO][4418] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.e33cc7cf3607af5b291c340baf71a7b270b655b4af4725b0a0e9a4e419bbc0d4 Dec 13 01:48:38.186615 containerd[1449]: 2024-12-13 01:48:38.107 [INFO][4418] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e33cc7cf3607af5b291c340baf71a7b270b655b4af4725b0a0e9a4e419bbc0d4" host="localhost" Dec 13 01:48:38.186615 containerd[1449]: 2024-12-13 01:48:38.123 [INFO][4418] ipam/ipam.go 1216: Successfully claimed IPs: 
[192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.e33cc7cf3607af5b291c340baf71a7b270b655b4af4725b0a0e9a4e419bbc0d4" host="localhost" Dec 13 01:48:38.186615 containerd[1449]: 2024-12-13 01:48:38.123 [INFO][4418] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.e33cc7cf3607af5b291c340baf71a7b270b655b4af4725b0a0e9a4e419bbc0d4" host="localhost" Dec 13 01:48:38.186615 containerd[1449]: 2024-12-13 01:48:38.124 [INFO][4418] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:48:38.186615 containerd[1449]: 2024-12-13 01:48:38.124 [INFO][4418] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="e33cc7cf3607af5b291c340baf71a7b270b655b4af4725b0a0e9a4e419bbc0d4" HandleID="k8s-pod-network.e33cc7cf3607af5b291c340baf71a7b270b655b4af4725b0a0e9a4e419bbc0d4" Workload="localhost-k8s-csi--node--driver--tv6tv-eth0" Dec 13 01:48:38.189447 containerd[1449]: 2024-12-13 01:48:38.131 [INFO][4382] cni-plugin/k8s.go 386: Populated endpoint ContainerID="e33cc7cf3607af5b291c340baf71a7b270b655b4af4725b0a0e9a4e419bbc0d4" Namespace="calico-system" Pod="csi-node-driver-tv6tv" WorkloadEndpoint="localhost-k8s-csi--node--driver--tv6tv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--tv6tv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"1079435b-e60b-443f-932d-e02d21e8e429", ResourceVersion:"855", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 48, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-tv6tv", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic68c9ceec85", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:48:38.189447 containerd[1449]: 2024-12-13 01:48:38.132 [INFO][4382] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="e33cc7cf3607af5b291c340baf71a7b270b655b4af4725b0a0e9a4e419bbc0d4" Namespace="calico-system" Pod="csi-node-driver-tv6tv" WorkloadEndpoint="localhost-k8s-csi--node--driver--tv6tv-eth0" Dec 13 01:48:38.189447 containerd[1449]: 2024-12-13 01:48:38.132 [INFO][4382] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic68c9ceec85 ContainerID="e33cc7cf3607af5b291c340baf71a7b270b655b4af4725b0a0e9a4e419bbc0d4" Namespace="calico-system" Pod="csi-node-driver-tv6tv" WorkloadEndpoint="localhost-k8s-csi--node--driver--tv6tv-eth0" Dec 13 01:48:38.189447 containerd[1449]: 2024-12-13 01:48:38.138 [INFO][4382] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="e33cc7cf3607af5b291c340baf71a7b270b655b4af4725b0a0e9a4e419bbc0d4" Namespace="calico-system" Pod="csi-node-driver-tv6tv" WorkloadEndpoint="localhost-k8s-csi--node--driver--tv6tv-eth0" Dec 13 01:48:38.189447 containerd[1449]: 2024-12-13 01:48:38.139 [INFO][4382] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="e33cc7cf3607af5b291c340baf71a7b270b655b4af4725b0a0e9a4e419bbc0d4" Namespace="calico-system" Pod="csi-node-driver-tv6tv" WorkloadEndpoint="localhost-k8s-csi--node--driver--tv6tv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--tv6tv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"1079435b-e60b-443f-932d-e02d21e8e429", ResourceVersion:"855", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 48, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e33cc7cf3607af5b291c340baf71a7b270b655b4af4725b0a0e9a4e419bbc0d4", Pod:"csi-node-driver-tv6tv", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic68c9ceec85", MAC:"7e:81:ef:3e:5a:bf", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:48:38.189447 containerd[1449]: 2024-12-13 01:48:38.184 [INFO][4382] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="e33cc7cf3607af5b291c340baf71a7b270b655b4af4725b0a0e9a4e419bbc0d4" Namespace="calico-system" Pod="csi-node-driver-tv6tv" WorkloadEndpoint="localhost-k8s-csi--node--driver--tv6tv-eth0" Dec 13 01:48:38.215271 containerd[1449]: time="2024-12-13T01:48:38.215162270Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:48:38.215271 containerd[1449]: time="2024-12-13T01:48:38.215224477Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:48:38.215271 containerd[1449]: time="2024-12-13T01:48:38.215239719Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:48:38.215484 containerd[1449]: time="2024-12-13T01:48:38.215324889Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:48:38.250748 systemd[1]: Started cri-containerd-e33cc7cf3607af5b291c340baf71a7b270b655b4af4725b0a0e9a4e419bbc0d4.scope - libcontainer container e33cc7cf3607af5b291c340baf71a7b270b655b4af4725b0a0e9a4e419bbc0d4. 
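Each "Started cri-containerd-<id>.scope" unit above corresponds to containerd creating and then starting a task for a sandbox or container, driven over the CRI by kubelet. The sketch below shows roughly the same sequence through containerd's Go client; the socket path, namespace, container ID, and snapshot name are placeholders (the image reference is taken from the PullImage lines in this log), and error handling is reduced to log.Fatal for brevity.

    package main

    import (
        "context"
        "log"

        "github.com/containerd/containerd"
        "github.com/containerd/containerd/cio"
        "github.com/containerd/containerd/namespaces"
        "github.com/containerd/containerd/oci"
    )

    func main() {
        // Connect to the same daemon kubelet talks to; CRI-managed pods
        // live in the "k8s.io" namespace.
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()
        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

        // Pull and unpack the image.
        image, err := client.Pull(ctx, "ghcr.io/flatcar/calico/csi:v3.29.1",
            containerd.WithPullUnpack)
        if err != nil {
            log.Fatal(err)
        }

        // Create the container, then its task; starting the task is the step
        // that surfaces as a new cri-containerd-<id>.scope unit in systemd.
        container, err := client.NewContainer(ctx, "csi-demo",
            containerd.WithImage(image),
            containerd.WithNewSnapshot("csi-demo-snapshot", image),
            containerd.WithNewSpec(oci.WithImageConfig(image)),
        )
        if err != nil {
            log.Fatal(err)
        }
        task, err := container.NewTask(ctx, cio.NewCreator(cio.WithStdio))
        if err != nil {
            log.Fatal(err)
        }
        if err := task.Start(ctx); err != nil {
            log.Fatal(err)
        }
    }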
Dec 13 01:48:38.255672 kubelet[2458]: I1213 01:48:38.255638 2458 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 01:48:38.256024 kubelet[2458]: E1213 01:48:38.256004 2458 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:48:38.268445 systemd-resolved[1310]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 01:48:38.314099 containerd[1449]: time="2024-12-13T01:48:38.314005665Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tv6tv,Uid:1079435b-e60b-443f-932d-e02d21e8e429,Namespace:calico-system,Attempt:1,} returns sandbox id \"e33cc7cf3607af5b291c340baf71a7b270b655b4af4725b0a0e9a4e419bbc0d4\"" Dec 13 01:48:38.318173 systemd-networkd[1379]: cali4a4da330dbf: Link UP Dec 13 01:48:38.318398 systemd-networkd[1379]: cali4a4da330dbf: Gained carrier Dec 13 01:48:38.322782 containerd[1449]: time="2024-12-13T01:48:38.322745767Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Dec 13 01:48:38.327821 systemd[1]: Started sshd@7-10.0.0.141:22-10.0.0.1:54768.service - OpenSSH per-connection server daemon (10.0.0.1:54768). Dec 13 01:48:38.337335 containerd[1449]: 2024-12-13 01:48:37.987 [INFO][4369] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--5d4845d985--cnl5g-eth0 calico-kube-controllers-5d4845d985- calico-system 15fcbc53-768f-4fca-83b6-c146a5c50cc1 853 0 2024-12-13 01:48:13 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5d4845d985 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-5d4845d985-cnl5g eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali4a4da330dbf [] []}} ContainerID="816dfcd254b340bcd20b13b96f507cd88813bf23fed0e5b13fbed42f66bdbeb0" Namespace="calico-system" Pod="calico-kube-controllers-5d4845d985-cnl5g" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5d4845d985--cnl5g-" Dec 13 01:48:38.337335 containerd[1449]: 2024-12-13 01:48:37.987 [INFO][4369] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="816dfcd254b340bcd20b13b96f507cd88813bf23fed0e5b13fbed42f66bdbeb0" Namespace="calico-system" Pod="calico-kube-controllers-5d4845d985-cnl5g" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5d4845d985--cnl5g-eth0" Dec 13 01:48:38.337335 containerd[1449]: 2024-12-13 01:48:38.059 [INFO][4410] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="816dfcd254b340bcd20b13b96f507cd88813bf23fed0e5b13fbed42f66bdbeb0" HandleID="k8s-pod-network.816dfcd254b340bcd20b13b96f507cd88813bf23fed0e5b13fbed42f66bdbeb0" Workload="localhost-k8s-calico--kube--controllers--5d4845d985--cnl5g-eth0" Dec 13 01:48:38.337335 containerd[1449]: 2024-12-13 01:48:38.080 [INFO][4410] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="816dfcd254b340bcd20b13b96f507cd88813bf23fed0e5b13fbed42f66bdbeb0" HandleID="k8s-pod-network.816dfcd254b340bcd20b13b96f507cd88813bf23fed0e5b13fbed42f66bdbeb0" Workload="localhost-k8s-calico--kube--controllers--5d4845d985--cnl5g-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400028d330), 
Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-5d4845d985-cnl5g", "timestamp":"2024-12-13 01:48:38.059336909 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 01:48:38.337335 containerd[1449]: 2024-12-13 01:48:38.080 [INFO][4410] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:48:38.337335 containerd[1449]: 2024-12-13 01:48:38.124 [INFO][4410] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:48:38.337335 containerd[1449]: 2024-12-13 01:48:38.125 [INFO][4410] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 13 01:48:38.337335 containerd[1449]: 2024-12-13 01:48:38.181 [INFO][4410] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.816dfcd254b340bcd20b13b96f507cd88813bf23fed0e5b13fbed42f66bdbeb0" host="localhost" Dec 13 01:48:38.337335 containerd[1449]: 2024-12-13 01:48:38.270 [INFO][4410] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Dec 13 01:48:38.337335 containerd[1449]: 2024-12-13 01:48:38.281 [INFO][4410] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Dec 13 01:48:38.337335 containerd[1449]: 2024-12-13 01:48:38.283 [INFO][4410] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 13 01:48:38.337335 containerd[1449]: 2024-12-13 01:48:38.288 [INFO][4410] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 13 01:48:38.337335 containerd[1449]: 2024-12-13 01:48:38.289 [INFO][4410] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.816dfcd254b340bcd20b13b96f507cd88813bf23fed0e5b13fbed42f66bdbeb0" host="localhost" Dec 13 01:48:38.337335 containerd[1449]: 2024-12-13 01:48:38.293 [INFO][4410] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.816dfcd254b340bcd20b13b96f507cd88813bf23fed0e5b13fbed42f66bdbeb0 Dec 13 01:48:38.337335 containerd[1449]: 2024-12-13 01:48:38.299 [INFO][4410] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.816dfcd254b340bcd20b13b96f507cd88813bf23fed0e5b13fbed42f66bdbeb0" host="localhost" Dec 13 01:48:38.337335 containerd[1449]: 2024-12-13 01:48:38.305 [INFO][4410] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.816dfcd254b340bcd20b13b96f507cd88813bf23fed0e5b13fbed42f66bdbeb0" host="localhost" Dec 13 01:48:38.337335 containerd[1449]: 2024-12-13 01:48:38.306 [INFO][4410] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.816dfcd254b340bcd20b13b96f507cd88813bf23fed0e5b13fbed42f66bdbeb0" host="localhost" Dec 13 01:48:38.337335 containerd[1449]: 2024-12-13 01:48:38.306 [INFO][4410] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
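This second complete IPAM walk ends exactly like the first: claim an address, write the block, release the host-wide lock. That lock is what allows several pods (.130 through .133 so far in this window) to be wired up concurrently without two CNI invocations claiming the same address. A toy demonstration of the serialization follows; none of this is Calico code, only the locking pattern the "About to acquire" / "Acquired" / "Released" lines trace.

    package main

    import (
        "fmt"
        "sync"
    )

    // hostLock stands in for the host-wide IPAM lock that brackets every
    // allocation and release in this log.
    var hostLock sync.Mutex

    var next = 130 // .130 was the first address claimed in this window

    func claim() string {
        hostLock.Lock()
        defer hostLock.Unlock()
        ip := fmt.Sprintf("192.168.88.%d/26", next)
        next++
        return ip
    }

    func main() {
        var wg sync.WaitGroup
        for i := 0; i < 4; i++ { // four pod setups in this log window
            wg.Add(1)
            go func() {
                defer wg.Done()
                fmt.Println(claim()) // .130 to .133, each exactly once
            }()
        }
        wg.Wait()
    }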
Dec 13 01:48:38.337335 containerd[1449]: 2024-12-13 01:48:38.306 [INFO][4410] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="816dfcd254b340bcd20b13b96f507cd88813bf23fed0e5b13fbed42f66bdbeb0" HandleID="k8s-pod-network.816dfcd254b340bcd20b13b96f507cd88813bf23fed0e5b13fbed42f66bdbeb0" Workload="localhost-k8s-calico--kube--controllers--5d4845d985--cnl5g-eth0" Dec 13 01:48:38.337930 containerd[1449]: 2024-12-13 01:48:38.312 [INFO][4369] cni-plugin/k8s.go 386: Populated endpoint ContainerID="816dfcd254b340bcd20b13b96f507cd88813bf23fed0e5b13fbed42f66bdbeb0" Namespace="calico-system" Pod="calico-kube-controllers-5d4845d985-cnl5g" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5d4845d985--cnl5g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5d4845d985--cnl5g-eth0", GenerateName:"calico-kube-controllers-5d4845d985-", Namespace:"calico-system", SelfLink:"", UID:"15fcbc53-768f-4fca-83b6-c146a5c50cc1", ResourceVersion:"853", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 48, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5d4845d985", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-5d4845d985-cnl5g", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali4a4da330dbf", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:48:38.337930 containerd[1449]: 2024-12-13 01:48:38.312 [INFO][4369] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="816dfcd254b340bcd20b13b96f507cd88813bf23fed0e5b13fbed42f66bdbeb0" Namespace="calico-system" Pod="calico-kube-controllers-5d4845d985-cnl5g" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5d4845d985--cnl5g-eth0" Dec 13 01:48:38.337930 containerd[1449]: 2024-12-13 01:48:38.313 [INFO][4369] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4a4da330dbf ContainerID="816dfcd254b340bcd20b13b96f507cd88813bf23fed0e5b13fbed42f66bdbeb0" Namespace="calico-system" Pod="calico-kube-controllers-5d4845d985-cnl5g" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5d4845d985--cnl5g-eth0" Dec 13 01:48:38.337930 containerd[1449]: 2024-12-13 01:48:38.318 [INFO][4369] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="816dfcd254b340bcd20b13b96f507cd88813bf23fed0e5b13fbed42f66bdbeb0" Namespace="calico-system" Pod="calico-kube-controllers-5d4845d985-cnl5g" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5d4845d985--cnl5g-eth0" Dec 13 01:48:38.337930 containerd[1449]: 2024-12-13 01:48:38.321 [INFO][4369] cni-plugin/k8s.go 414: Added Mac, interface name, and active container 
ID to endpoint ContainerID="816dfcd254b340bcd20b13b96f507cd88813bf23fed0e5b13fbed42f66bdbeb0" Namespace="calico-system" Pod="calico-kube-controllers-5d4845d985-cnl5g" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5d4845d985--cnl5g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5d4845d985--cnl5g-eth0", GenerateName:"calico-kube-controllers-5d4845d985-", Namespace:"calico-system", SelfLink:"", UID:"15fcbc53-768f-4fca-83b6-c146a5c50cc1", ResourceVersion:"853", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 48, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5d4845d985", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"816dfcd254b340bcd20b13b96f507cd88813bf23fed0e5b13fbed42f66bdbeb0", Pod:"calico-kube-controllers-5d4845d985-cnl5g", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali4a4da330dbf", MAC:"a2:a8:7e:fb:bb:26", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:48:38.337930 containerd[1449]: 2024-12-13 01:48:38.333 [INFO][4369] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="816dfcd254b340bcd20b13b96f507cd88813bf23fed0e5b13fbed42f66bdbeb0" Namespace="calico-system" Pod="calico-kube-controllers-5d4845d985-cnl5g" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5d4845d985--cnl5g-eth0" Dec 13 01:48:38.369316 containerd[1449]: time="2024-12-13T01:48:38.368687413Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:48:38.369316 containerd[1449]: time="2024-12-13T01:48:38.368738379Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:48:38.369316 containerd[1449]: time="2024-12-13T01:48:38.368748820Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:48:38.369316 containerd[1449]: time="2024-12-13T01:48:38.368917319Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:48:38.379439 sshd[4507]: Accepted publickey for core from 10.0.0.1 port 54768 ssh2: RSA SHA256:yVKhZEHbC7ylZ7bY3Y8pwdh1t/xp6Vz/y3yLFfd9j+Q Dec 13 01:48:38.380692 sshd[4507]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:48:38.390759 systemd[1]: Started cri-containerd-816dfcd254b340bcd20b13b96f507cd88813bf23fed0e5b13fbed42f66bdbeb0.scope - libcontainer container 816dfcd254b340bcd20b13b96f507cd88813bf23fed0e5b13fbed42f66bdbeb0. 
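The host-side interface names being set here (cali4a4da330dbf, cali402b7700f56) are fifteen characters: the prefix "cali" plus eleven hex digits derived from a hash of the workload identity, which keeps the name deterministic per pod and within the kernel's fifteen-usable-character interface-name limit (IFNAMSIZ is 16 including the NUL). A sketch of that scheme follows; the exact hash input is an assumption, while the prefix-plus-eleven-hex shape matches the names in this log:

    package main

    import (
        "crypto/sha1"
        "encoding/hex"
        "fmt"
    )

    // vethNameForWorkload derives a stable, 15-char host veth name
    // ("cali" + 11 hex chars) from the workload's identity. The input
    // string here (namespace.pod) is an assumption for illustration.
    func vethNameForWorkload(namespace, pod string) string {
        sum := sha1.Sum([]byte(namespace + "." + pod))
        return "cali" + hex.EncodeToString(sum[:])[:11]
    }

    func main() {
        fmt.Println(vethNameForWorkload("calico-system", "calico-kube-controllers-5d4845d985-cnl5g"))
        fmt.Println(vethNameForWorkload("calico-apiserver", "calico-apiserver-568767dfbd-c99fj"))
    }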
Dec 13 01:48:38.395831 systemd-logind[1427]: New session 8 of user core. Dec 13 01:48:38.396732 systemd[1]: Started session-8.scope - Session 8 of User core. Dec 13 01:48:38.407249 systemd-resolved[1310]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 01:48:38.413642 systemd-networkd[1379]: cali402b7700f56: Link UP Dec 13 01:48:38.413830 systemd-networkd[1379]: cali402b7700f56: Gained carrier Dec 13 01:48:38.430564 containerd[1449]: 2024-12-13 01:48:38.052 [INFO][4395] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--568767dfbd--c99fj-eth0 calico-apiserver-568767dfbd- calico-apiserver 1e6eb865-39dc-4ee5-9cdc-f699e1d6b8f5 854 0 2024-12-13 01:48:13 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:568767dfbd projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-568767dfbd-c99fj eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali402b7700f56 [] []}} ContainerID="86f256d0eeb5f6155eeec9f376c9b2b3d82ed845a5fbc08c42fc7d3992b39492" Namespace="calico-apiserver" Pod="calico-apiserver-568767dfbd-c99fj" WorkloadEndpoint="localhost-k8s-calico--apiserver--568767dfbd--c99fj-" Dec 13 01:48:38.430564 containerd[1449]: 2024-12-13 01:48:38.053 [INFO][4395] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="86f256d0eeb5f6155eeec9f376c9b2b3d82ed845a5fbc08c42fc7d3992b39492" Namespace="calico-apiserver" Pod="calico-apiserver-568767dfbd-c99fj" WorkloadEndpoint="localhost-k8s-calico--apiserver--568767dfbd--c99fj-eth0" Dec 13 01:48:38.430564 containerd[1449]: 2024-12-13 01:48:38.103 [INFO][4429] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="86f256d0eeb5f6155eeec9f376c9b2b3d82ed845a5fbc08c42fc7d3992b39492" HandleID="k8s-pod-network.86f256d0eeb5f6155eeec9f376c9b2b3d82ed845a5fbc08c42fc7d3992b39492" Workload="localhost-k8s-calico--apiserver--568767dfbd--c99fj-eth0" Dec 13 01:48:38.430564 containerd[1449]: 2024-12-13 01:48:38.269 [INFO][4429] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="86f256d0eeb5f6155eeec9f376c9b2b3d82ed845a5fbc08c42fc7d3992b39492" HandleID="k8s-pod-network.86f256d0eeb5f6155eeec9f376c9b2b3d82ed845a5fbc08c42fc7d3992b39492" Workload="localhost-k8s-calico--apiserver--568767dfbd--c99fj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002e1950), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-568767dfbd-c99fj", "timestamp":"2024-12-13 01:48:38.103384822 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 01:48:38.430564 containerd[1449]: 2024-12-13 01:48:38.269 [INFO][4429] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:48:38.430564 containerd[1449]: 2024-12-13 01:48:38.306 [INFO][4429] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
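The "SHA256:yVKhZEHbC7..." in the Accepted-publickey lines is OpenSSH's key fingerprint: the unpadded base64 of the SHA-256 digest of the wire-format public key blob. A self-contained sketch, using a throwaway ed25519 key since the client's actual RSA key is not in the log:

    package main

    import (
        "crypto/ed25519"
        "crypto/rand"
        "crypto/sha256"
        "encoding/base64"
        "fmt"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        // A throwaway key pair stands in for the client key in the log.
        pub, _, err := ed25519.GenerateKey(rand.Reader)
        if err != nil {
            panic(err)
        }
        sshPub, err := ssh.NewPublicKey(pub)
        if err != nil {
            panic(err)
        }

        // x/crypto/ssh implements the fingerprint directly...
        fmt.Println(ssh.FingerprintSHA256(sshPub))

        // ...and this is all it does under the hood: SHA-256 of the
        // marshaled key blob, base64-encoded without padding.
        sum := sha256.Sum256(sshPub.Marshal())
        fmt.Println("SHA256:" + base64.RawStdEncoding.EncodeToString(sum[:]))
    }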
Dec 13 01:48:38.430564 containerd[1449]: 2024-12-13 01:48:38.306 [INFO][4429] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 13 01:48:38.430564 containerd[1449]: 2024-12-13 01:48:38.310 [INFO][4429] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.86f256d0eeb5f6155eeec9f376c9b2b3d82ed845a5fbc08c42fc7d3992b39492" host="localhost" Dec 13 01:48:38.430564 containerd[1449]: 2024-12-13 01:48:38.368 [INFO][4429] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Dec 13 01:48:38.430564 containerd[1449]: 2024-12-13 01:48:38.377 [INFO][4429] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Dec 13 01:48:38.430564 containerd[1449]: 2024-12-13 01:48:38.380 [INFO][4429] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 13 01:48:38.430564 containerd[1449]: 2024-12-13 01:48:38.383 [INFO][4429] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 13 01:48:38.430564 containerd[1449]: 2024-12-13 01:48:38.383 [INFO][4429] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.86f256d0eeb5f6155eeec9f376c9b2b3d82ed845a5fbc08c42fc7d3992b39492" host="localhost" Dec 13 01:48:38.430564 containerd[1449]: 2024-12-13 01:48:38.387 [INFO][4429] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.86f256d0eeb5f6155eeec9f376c9b2b3d82ed845a5fbc08c42fc7d3992b39492 Dec 13 01:48:38.430564 containerd[1449]: 2024-12-13 01:48:38.391 [INFO][4429] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.86f256d0eeb5f6155eeec9f376c9b2b3d82ed845a5fbc08c42fc7d3992b39492" host="localhost" Dec 13 01:48:38.430564 containerd[1449]: 2024-12-13 01:48:38.402 [INFO][4429] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.86f256d0eeb5f6155eeec9f376c9b2b3d82ed845a5fbc08c42fc7d3992b39492" host="localhost" Dec 13 01:48:38.430564 containerd[1449]: 2024-12-13 01:48:38.402 [INFO][4429] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.86f256d0eeb5f6155eeec9f376c9b2b3d82ed845a5fbc08c42fc7d3992b39492" host="localhost" Dec 13 01:48:38.430564 containerd[1449]: 2024-12-13 01:48:38.402 [INFO][4429] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
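Each assignment above is recorded under a handle ID of the form k8s-pod-network.<containerID>, which is what teardown uses later in this log ("Releasing address using handleID", followed by the benign "Asked to release address but it doesn't exist. Ignoring" when nothing remains). A sketch of that handle-keyed, idempotent bookkeeping, with hypothetical names:

    package main

    import (
        "fmt"
        "net/netip"
        "sync"
    )

    type ipamStore struct {
        mu       sync.Mutex
        byHandle map[string][]netip.Addr
    }

    // releaseByHandle frees every address recorded under the handle.
    // Releasing an unknown handle is not an error: CNI DEL must be
    // safe to repeat, so it only warns, as in the log.
    func (s *ipamStore) releaseByHandle(handle string) {
        s.mu.Lock()
        defer s.mu.Unlock()
        addrs, ok := s.byHandle[handle]
        if !ok {
            fmt.Printf("WARNING: asked to release %q but it doesn't exist, ignoring\n", handle)
            return
        }
        delete(s.byHandle, handle)
        fmt.Printf("released %v for %q\n", addrs, handle)
    }

    func main() {
        s := &ipamStore{byHandle: map[string][]netip.Addr{
            "k8s-pod-network.816dfc(sketch)": {netip.MustParseAddr("192.168.88.133")},
        }}
        s.releaseByHandle("k8s-pod-network.816dfc(sketch)") // frees the address
        s.releaseByHandle("k8s-pod-network.816dfc(sketch)") // repeat: warns, no error
    }

Making release a no-op for unknown handles is what lets the sandbox teardowns at the end of this log run repeatedly without failing.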
Dec 13 01:48:38.430564 containerd[1449]: 2024-12-13 01:48:38.402 [INFO][4429] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="86f256d0eeb5f6155eeec9f376c9b2b3d82ed845a5fbc08c42fc7d3992b39492" HandleID="k8s-pod-network.86f256d0eeb5f6155eeec9f376c9b2b3d82ed845a5fbc08c42fc7d3992b39492" Workload="localhost-k8s-calico--apiserver--568767dfbd--c99fj-eth0" Dec 13 01:48:38.431145 containerd[1449]: 2024-12-13 01:48:38.409 [INFO][4395] cni-plugin/k8s.go 386: Populated endpoint ContainerID="86f256d0eeb5f6155eeec9f376c9b2b3d82ed845a5fbc08c42fc7d3992b39492" Namespace="calico-apiserver" Pod="calico-apiserver-568767dfbd-c99fj" WorkloadEndpoint="localhost-k8s-calico--apiserver--568767dfbd--c99fj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--568767dfbd--c99fj-eth0", GenerateName:"calico-apiserver-568767dfbd-", Namespace:"calico-apiserver", SelfLink:"", UID:"1e6eb865-39dc-4ee5-9cdc-f699e1d6b8f5", ResourceVersion:"854", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 48, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"568767dfbd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-568767dfbd-c99fj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali402b7700f56", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:48:38.431145 containerd[1449]: 2024-12-13 01:48:38.410 [INFO][4395] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.134/32] ContainerID="86f256d0eeb5f6155eeec9f376c9b2b3d82ed845a5fbc08c42fc7d3992b39492" Namespace="calico-apiserver" Pod="calico-apiserver-568767dfbd-c99fj" WorkloadEndpoint="localhost-k8s-calico--apiserver--568767dfbd--c99fj-eth0" Dec 13 01:48:38.431145 containerd[1449]: 2024-12-13 01:48:38.410 [INFO][4395] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali402b7700f56 ContainerID="86f256d0eeb5f6155eeec9f376c9b2b3d82ed845a5fbc08c42fc7d3992b39492" Namespace="calico-apiserver" Pod="calico-apiserver-568767dfbd-c99fj" WorkloadEndpoint="localhost-k8s-calico--apiserver--568767dfbd--c99fj-eth0" Dec 13 01:48:38.431145 containerd[1449]: 2024-12-13 01:48:38.412 [INFO][4395] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="86f256d0eeb5f6155eeec9f376c9b2b3d82ed845a5fbc08c42fc7d3992b39492" Namespace="calico-apiserver" Pod="calico-apiserver-568767dfbd-c99fj" WorkloadEndpoint="localhost-k8s-calico--apiserver--568767dfbd--c99fj-eth0" Dec 13 01:48:38.431145 containerd[1449]: 2024-12-13 01:48:38.412 [INFO][4395] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="86f256d0eeb5f6155eeec9f376c9b2b3d82ed845a5fbc08c42fc7d3992b39492" Namespace="calico-apiserver" Pod="calico-apiserver-568767dfbd-c99fj" WorkloadEndpoint="localhost-k8s-calico--apiserver--568767dfbd--c99fj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--568767dfbd--c99fj-eth0", GenerateName:"calico-apiserver-568767dfbd-", Namespace:"calico-apiserver", SelfLink:"", UID:"1e6eb865-39dc-4ee5-9cdc-f699e1d6b8f5", ResourceVersion:"854", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 48, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"568767dfbd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"86f256d0eeb5f6155eeec9f376c9b2b3d82ed845a5fbc08c42fc7d3992b39492", Pod:"calico-apiserver-568767dfbd-c99fj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali402b7700f56", MAC:"9e:d3:41:eb:ab:2c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:48:38.431145 containerd[1449]: 2024-12-13 01:48:38.423 [INFO][4395] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="86f256d0eeb5f6155eeec9f376c9b2b3d82ed845a5fbc08c42fc7d3992b39492" Namespace="calico-apiserver" Pod="calico-apiserver-568767dfbd-c99fj" WorkloadEndpoint="localhost-k8s-calico--apiserver--568767dfbd--c99fj-eth0" Dec 13 01:48:38.443299 containerd[1449]: time="2024-12-13T01:48:38.443262279Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5d4845d985-cnl5g,Uid:15fcbc53-768f-4fca-83b6-c146a5c50cc1,Namespace:calico-system,Attempt:1,} returns sandbox id \"816dfcd254b340bcd20b13b96f507cd88813bf23fed0e5b13fbed42f66bdbeb0\"" Dec 13 01:48:38.470749 containerd[1449]: time="2024-12-13T01:48:38.469491668Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:48:38.470749 containerd[1449]: time="2024-12-13T01:48:38.469562756Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:48:38.470749 containerd[1449]: time="2024-12-13T01:48:38.469578078Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:48:38.470749 containerd[1449]: time="2024-12-13T01:48:38.469744216Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:48:38.492138 systemd[1]: Started cri-containerd-86f256d0eeb5f6155eeec9f376c9b2b3d82ed845a5fbc08c42fc7d3992b39492.scope - libcontainer container 86f256d0eeb5f6155eeec9f376c9b2b3d82ed845a5fbc08c42fc7d3992b39492. 
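Both pods land in the same affine block: 192.168.88.128/26 holds 64 addresses (.128 through .191), so 192.168.88.133 and .134 are simply consecutive claims from it. The arithmetic, checked with net/netip:

    package main

    import (
        "fmt"
        "net/netip"
    )

    func main() {
        block := netip.MustParsePrefix("192.168.88.128/26") // 2^(32-26) = 64 addresses

        // Last address in the block: first address + 63.
        last := block.Addr()
        for i := 0; i < 63; i++ {
            last = last.Next()
        }
        fmt.Printf("block %s covers %s - %s\n", block, block.Addr(), last)

        for _, s := range []string{"192.168.88.133", "192.168.88.134"} {
            fmt.Printf("%s in block: %v\n", s, block.Contains(netip.MustParseAddr(s)))
        }
    }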
Dec 13 01:48:38.513459 systemd-resolved[1310]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 01:48:38.559992 containerd[1449]: time="2024-12-13T01:48:38.559935958Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-568767dfbd-c99fj,Uid:1e6eb865-39dc-4ee5-9cdc-f699e1d6b8f5,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"86f256d0eeb5f6155eeec9f376c9b2b3d82ed845a5fbc08c42fc7d3992b39492\"" Dec 13 01:48:38.565044 containerd[1449]: time="2024-12-13T01:48:38.564953482Z" level=info msg="CreateContainer within sandbox \"86f256d0eeb5f6155eeec9f376c9b2b3d82ed845a5fbc08c42fc7d3992b39492\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Dec 13 01:48:38.574610 containerd[1449]: time="2024-12-13T01:48:38.574519677Z" level=info msg="CreateContainer within sandbox \"86f256d0eeb5f6155eeec9f376c9b2b3d82ed845a5fbc08c42fc7d3992b39492\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"c1bf3da22968fe097d94ee985cd619c822daa3fd96fd2b40665193de6edb9259\"" Dec 13 01:48:38.575235 containerd[1449]: time="2024-12-13T01:48:38.575101343Z" level=info msg="StartContainer for \"c1bf3da22968fe097d94ee985cd619c822daa3fd96fd2b40665193de6edb9259\"" Dec 13 01:48:38.603746 systemd[1]: Started cri-containerd-c1bf3da22968fe097d94ee985cd619c822daa3fd96fd2b40665193de6edb9259.scope - libcontainer container c1bf3da22968fe097d94ee985cd619c822daa3fd96fd2b40665193de6edb9259. Dec 13 01:48:38.634751 containerd[1449]: time="2024-12-13T01:48:38.634698284Z" level=info msg="StartContainer for \"c1bf3da22968fe097d94ee985cd619c822daa3fd96fd2b40665193de6edb9259\" returns successfully" Dec 13 01:48:38.685717 sshd[4507]: pam_unix(sshd:session): session closed for user core Dec 13 01:48:38.688501 systemd[1]: sshd@7-10.0.0.141:22-10.0.0.1:54768.service: Deactivated successfully. Dec 13 01:48:38.690396 systemd[1]: session-8.scope: Deactivated successfully. Dec 13 01:48:38.691274 systemd-logind[1427]: Session 8 logged out. Waiting for processes to exit. Dec 13 01:48:38.693185 systemd-logind[1427]: Removed session 8. 
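The CreateContainer/StartContainer pair above is kubelet driving containerd over CRI. Roughly the same sequence against containerd's native Go client looks like the sketch below; this is an analogue, not the CRI API, it assumes a running containerd at the default socket, and the container and snapshot IDs are made up:

    package main

    import (
        "context"
        "fmt"

        "github.com/containerd/containerd"
        "github.com/containerd/containerd/cio"
        "github.com/containerd/containerd/namespaces"
        "github.com/containerd/containerd/oci"
    )

    func main() {
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            panic(err)
        }
        defer client.Close()
        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

        // Pull resolves the tag to a digest, like the PullImage lines above.
        image, err := client.Pull(ctx, "ghcr.io/flatcar/calico/csi:v3.29.1", containerd.WithPullUnpack)
        if err != nil {
            panic(err)
        }

        // "CreateContainer within sandbox ... returns container id ..."
        container, err := client.NewContainer(ctx, "calico-csi-demo",
            containerd.WithNewSnapshot("calico-csi-demo-snap", image),
            containerd.WithNewSpec(oci.WithImageConfig(image)))
        if err != nil {
            panic(err)
        }
        defer container.Delete(ctx, containerd.WithSnapshotCleanup)

        // "StartContainer ... returns successfully": create the task, start it.
        task, err := container.NewTask(ctx, cio.NewCreator(cio.WithStdio))
        if err != nil {
            panic(err)
        }
        if err := task.Start(ctx); err != nil {
            panic(err)
        }
        fmt.Println("started task pid", task.Pid())
    }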
Dec 13 01:48:38.993629 kubelet[2458]: I1213 01:48:38.992545 2458 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 01:48:38.993629 kubelet[2458]: E1213 01:48:38.992870 2458 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:48:38.994610 kubelet[2458]: E1213 01:48:38.994525 2458 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:48:39.001556 kubelet[2458]: I1213 01:48:39.001508 2458 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-568767dfbd-c99fj" podStartSLOduration=26.001492685 podStartE2EDuration="26.001492685s" podCreationTimestamp="2024-12-13 01:48:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:48:39.001121684 +0000 UTC m=+39.317767739" watchObservedRunningTime="2024-12-13 01:48:39.001492685 +0000 UTC m=+39.318138740" Dec 13 01:48:39.266275 systemd-networkd[1379]: calic68c9ceec85: Gained IPv6LL Dec 13 01:48:39.306791 containerd[1449]: time="2024-12-13T01:48:39.306676632Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:48:39.313289 containerd[1449]: time="2024-12-13T01:48:39.307604254Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7464730" Dec 13 01:48:39.313382 containerd[1449]: time="2024-12-13T01:48:39.308844910Z" level=info msg="ImageCreate event name:\"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:48:39.313479 containerd[1449]: time="2024-12-13T01:48:39.312046501Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"8834384\" in 989.260769ms" Dec 13 01:48:39.313510 containerd[1449]: time="2024-12-13T01:48:39.313476938Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\"" Dec 13 01:48:39.314711 containerd[1449]: time="2024-12-13T01:48:39.314565338Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:48:39.315791 containerd[1449]: time="2024-12-13T01:48:39.315757309Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Dec 13 01:48:39.317328 containerd[1449]: time="2024-12-13T01:48:39.317245192Z" level=info msg="CreateContainer within sandbox \"e33cc7cf3607af5b291c340baf71a7b270b655b4af4725b0a0e9a4e419bbc0d4\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Dec 13 01:48:39.338917 containerd[1449]: time="2024-12-13T01:48:39.338874647Z" level=info msg="CreateContainer within sandbox \"e33cc7cf3607af5b291c340baf71a7b270b655b4af4725b0a0e9a4e419bbc0d4\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} 
returns container id \"cb065900a91862a3f8d0f263c459ad07ef4827b10ad6806d380bc8764017f520\"" Dec 13 01:48:39.340646 containerd[1449]: time="2024-12-13T01:48:39.339332177Z" level=info msg="StartContainer for \"cb065900a91862a3f8d0f263c459ad07ef4827b10ad6806d380bc8764017f520\"" Dec 13 01:48:39.370795 systemd[1]: Started cri-containerd-cb065900a91862a3f8d0f263c459ad07ef4827b10ad6806d380bc8764017f520.scope - libcontainer container cb065900a91862a3f8d0f263c459ad07ef4827b10ad6806d380bc8764017f520. Dec 13 01:48:39.397825 containerd[1449]: time="2024-12-13T01:48:39.397777914Z" level=info msg="StartContainer for \"cb065900a91862a3f8d0f263c459ad07ef4827b10ad6806d380bc8764017f520\" returns successfully" Dec 13 01:48:39.520712 systemd-networkd[1379]: cali4a4da330dbf: Gained IPv6LL Dec 13 01:48:39.906020 systemd-networkd[1379]: cali402b7700f56: Gained IPv6LL Dec 13 01:48:40.002667 kubelet[2458]: I1213 01:48:40.002550 2458 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 01:48:40.002995 kubelet[2458]: E1213 01:48:40.002893 2458 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:48:40.015558 systemd[1]: run-containerd-runc-k8s.io-cb065900a91862a3f8d0f263c459ad07ef4827b10ad6806d380bc8764017f520-runc.MinsjJ.mount: Deactivated successfully. Dec 13 01:48:43.702677 systemd[1]: Started sshd@8-10.0.0.141:22-10.0.0.1:56104.service - OpenSSH per-connection server daemon (10.0.0.1:56104). Dec 13 01:48:43.747360 sshd[4760]: Accepted publickey for core from 10.0.0.1 port 56104 ssh2: RSA SHA256:yVKhZEHbC7ylZ7bY3Y8pwdh1t/xp6Vz/y3yLFfd9j+Q Dec 13 01:48:43.748865 sshd[4760]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:48:43.752873 systemd-logind[1427]: New session 9 of user core. Dec 13 01:48:43.764771 systemd[1]: Started session-9.scope - Session 9 of User core. Dec 13 01:48:43.981142 sshd[4760]: pam_unix(sshd:session): session closed for user core Dec 13 01:48:43.984868 systemd[1]: sshd@8-10.0.0.141:22-10.0.0.1:56104.service: Deactivated successfully. Dec 13 01:48:43.986709 systemd[1]: session-9.scope: Deactivated successfully. Dec 13 01:48:43.988724 systemd-logind[1427]: Session 9 logged out. Waiting for processes to exit. Dec 13 01:48:43.989528 systemd-logind[1427]: Removed session 9. Dec 13 01:48:48.992215 systemd[1]: Started sshd@9-10.0.0.141:22-10.0.0.1:56108.service - OpenSSH per-connection server daemon (10.0.0.1:56108). Dec 13 01:48:49.028670 sshd[4784]: Accepted publickey for core from 10.0.0.1 port 56108 ssh2: RSA SHA256:yVKhZEHbC7ylZ7bY3Y8pwdh1t/xp6Vz/y3yLFfd9j+Q Dec 13 01:48:49.029648 sshd[4784]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:48:49.034000 systemd-logind[1427]: New session 10 of user core. Dec 13 01:48:49.039783 systemd[1]: Started session-10.scope - Session 10 of User core. Dec 13 01:48:49.192293 sshd[4784]: pam_unix(sshd:session): session closed for user core Dec 13 01:48:49.202274 systemd[1]: sshd@9-10.0.0.141:22-10.0.0.1:56108.service: Deactivated successfully. Dec 13 01:48:49.204143 systemd[1]: session-10.scope: Deactivated successfully. Dec 13 01:48:49.207876 systemd-logind[1427]: Session 10 logged out. Waiting for processes to exit. Dec 13 01:48:49.218884 systemd[1]: Started sshd@10-10.0.0.141:22-10.0.0.1:56118.service - OpenSSH per-connection server daemon (10.0.0.1:56118). Dec 13 01:48:49.220492 systemd-logind[1427]: Removed session 10. 
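The recurring kubelet "Nameserver limits exceeded" error reflects glibc's resolv.conf limit: the stub resolver honors at most three nameserver entries (MAXNS), so when a pod's resolv.conf would carry more, kubelet truncates the list and logs what survived. A sketch of that trimming; the three-entry limit is real, while the helper itself is illustrative:

    package main

    import "fmt"

    // glibc's resolv.conf parser honors at most 3 "nameserver" lines (MAXNS).
    const maxNameservers = 3

    // trimNameservers keeps the first maxNameservers entries and reports
    // whether anything was dropped.
    func trimNameservers(ns []string) ([]string, bool) {
        if len(ns) <= maxNameservers {
            return ns, false
        }
        return ns[:maxNameservers], true
    }

    func main() {
        got := []string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "8.8.4.4"}
        kept, trimmed := trimNameservers(got)
        if trimmed {
            fmt.Printf("Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: %s %s %s\n",
                kept[0], kept[1], kept[2])
        }
    }

The applied line in the log (1.1.1.1 1.0.0.1 8.8.8.8) is exactly three entries, consistent with at least a fourth nameserver having been dropped.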
Dec 13 01:48:49.255577 sshd[4800]: Accepted publickey for core from 10.0.0.1 port 56118 ssh2: RSA SHA256:yVKhZEHbC7ylZ7bY3Y8pwdh1t/xp6Vz/y3yLFfd9j+Q Dec 13 01:48:49.257042 sshd[4800]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:48:49.260669 systemd-logind[1427]: New session 11 of user core. Dec 13 01:48:49.274865 systemd[1]: Started session-11.scope - Session 11 of User core. Dec 13 01:48:49.486997 sshd[4800]: pam_unix(sshd:session): session closed for user core Dec 13 01:48:49.496983 systemd[1]: sshd@10-10.0.0.141:22-10.0.0.1:56118.service: Deactivated successfully. Dec 13 01:48:49.498865 systemd[1]: session-11.scope: Deactivated successfully. Dec 13 01:48:49.501814 systemd-logind[1427]: Session 11 logged out. Waiting for processes to exit. Dec 13 01:48:49.510739 systemd[1]: Started sshd@11-10.0.0.141:22-10.0.0.1:56124.service - OpenSSH per-connection server daemon (10.0.0.1:56124). Dec 13 01:48:49.512028 systemd-logind[1427]: Removed session 11. Dec 13 01:48:49.546039 sshd[4813]: Accepted publickey for core from 10.0.0.1 port 56124 ssh2: RSA SHA256:yVKhZEHbC7ylZ7bY3Y8pwdh1t/xp6Vz/y3yLFfd9j+Q Dec 13 01:48:49.547462 sshd[4813]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:48:49.551550 systemd-logind[1427]: New session 12 of user core. Dec 13 01:48:49.560744 systemd[1]: Started session-12.scope - Session 12 of User core. Dec 13 01:48:49.710645 sshd[4813]: pam_unix(sshd:session): session closed for user core Dec 13 01:48:49.713947 systemd[1]: sshd@11-10.0.0.141:22-10.0.0.1:56124.service: Deactivated successfully. Dec 13 01:48:49.717026 systemd[1]: session-12.scope: Deactivated successfully. Dec 13 01:48:49.717638 systemd-logind[1427]: Session 12 logged out. Waiting for processes to exit. Dec 13 01:48:49.718700 systemd-logind[1427]: Removed session 12. Dec 13 01:48:54.725271 systemd[1]: Started sshd@12-10.0.0.141:22-10.0.0.1:37170.service - OpenSSH per-connection server daemon (10.0.0.1:37170). Dec 13 01:48:54.762341 sshd[4831]: Accepted publickey for core from 10.0.0.1 port 37170 ssh2: RSA SHA256:yVKhZEHbC7ylZ7bY3Y8pwdh1t/xp6Vz/y3yLFfd9j+Q Dec 13 01:48:54.763905 sshd[4831]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:48:54.768263 systemd-logind[1427]: New session 13 of user core. Dec 13 01:48:54.778740 systemd[1]: Started session-13.scope - Session 13 of User core. Dec 13 01:48:54.922383 sshd[4831]: pam_unix(sshd:session): session closed for user core Dec 13 01:48:54.925610 systemd[1]: sshd@12-10.0.0.141:22-10.0.0.1:37170.service: Deactivated successfully. Dec 13 01:48:54.927333 systemd[1]: session-13.scope: Deactivated successfully. Dec 13 01:48:54.927961 systemd-logind[1427]: Session 13 logged out. Waiting for processes to exit. Dec 13 01:48:54.928852 systemd-logind[1427]: Removed session 13. Dec 13 01:48:59.761037 containerd[1449]: time="2024-12-13T01:48:59.760983924Z" level=info msg="StopPodSandbox for \"6e05dacb397eb09bdb17cafb5d5150dcb1c0fa877ba386d38c38595f8ec6d236\"" Dec 13 01:48:59.840089 containerd[1449]: 2024-12-13 01:48:59.803 [WARNING][4862] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6e05dacb397eb09bdb17cafb5d5150dcb1c0fa877ba386d38c38595f8ec6d236" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--hwhh6-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"1e0c6640-314d-45ec-b91c-da2f72cfd50a", ResourceVersion:"868", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 48, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e769e57ed6691c53b4a9dc8f993294cd7273311f7c4582e91f28b41db26e750e", Pod:"coredns-6f6b679f8f-hwhh6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calica6042ecefd", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:48:59.840089 containerd[1449]: 2024-12-13 01:48:59.803 [INFO][4862] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="6e05dacb397eb09bdb17cafb5d5150dcb1c0fa877ba386d38c38595f8ec6d236" Dec 13 01:48:59.840089 containerd[1449]: 2024-12-13 01:48:59.803 [INFO][4862] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6e05dacb397eb09bdb17cafb5d5150dcb1c0fa877ba386d38c38595f8ec6d236" iface="eth0" netns="" Dec 13 01:48:59.840089 containerd[1449]: 2024-12-13 01:48:59.803 [INFO][4862] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="6e05dacb397eb09bdb17cafb5d5150dcb1c0fa877ba386d38c38595f8ec6d236" Dec 13 01:48:59.840089 containerd[1449]: 2024-12-13 01:48:59.803 [INFO][4862] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6e05dacb397eb09bdb17cafb5d5150dcb1c0fa877ba386d38c38595f8ec6d236" Dec 13 01:48:59.840089 containerd[1449]: 2024-12-13 01:48:59.827 [INFO][4871] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6e05dacb397eb09bdb17cafb5d5150dcb1c0fa877ba386d38c38595f8ec6d236" HandleID="k8s-pod-network.6e05dacb397eb09bdb17cafb5d5150dcb1c0fa877ba386d38c38595f8ec6d236" Workload="localhost-k8s-coredns--6f6b679f8f--hwhh6-eth0" Dec 13 01:48:59.840089 containerd[1449]: 2024-12-13 01:48:59.827 [INFO][4871] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:48:59.840089 containerd[1449]: 2024-12-13 01:48:59.827 [INFO][4871] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:48:59.840089 containerd[1449]: 2024-12-13 01:48:59.835 [WARNING][4871] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6e05dacb397eb09bdb17cafb5d5150dcb1c0fa877ba386d38c38595f8ec6d236" HandleID="k8s-pod-network.6e05dacb397eb09bdb17cafb5d5150dcb1c0fa877ba386d38c38595f8ec6d236" Workload="localhost-k8s-coredns--6f6b679f8f--hwhh6-eth0" Dec 13 01:48:59.840089 containerd[1449]: 2024-12-13 01:48:59.835 [INFO][4871] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6e05dacb397eb09bdb17cafb5d5150dcb1c0fa877ba386d38c38595f8ec6d236" HandleID="k8s-pod-network.6e05dacb397eb09bdb17cafb5d5150dcb1c0fa877ba386d38c38595f8ec6d236" Workload="localhost-k8s-coredns--6f6b679f8f--hwhh6-eth0" Dec 13 01:48:59.840089 containerd[1449]: 2024-12-13 01:48:59.837 [INFO][4871] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:48:59.840089 containerd[1449]: 2024-12-13 01:48:59.838 [INFO][4862] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="6e05dacb397eb09bdb17cafb5d5150dcb1c0fa877ba386d38c38595f8ec6d236" Dec 13 01:48:59.840089 containerd[1449]: time="2024-12-13T01:48:59.839968881Z" level=info msg="TearDown network for sandbox \"6e05dacb397eb09bdb17cafb5d5150dcb1c0fa877ba386d38c38595f8ec6d236\" successfully" Dec 13 01:48:59.840089 containerd[1449]: time="2024-12-13T01:48:59.839992722Z" level=info msg="StopPodSandbox for \"6e05dacb397eb09bdb17cafb5d5150dcb1c0fa877ba386d38c38595f8ec6d236\" returns successfully" Dec 13 01:48:59.840685 containerd[1449]: time="2024-12-13T01:48:59.840534846Z" level=info msg="RemovePodSandbox for \"6e05dacb397eb09bdb17cafb5d5150dcb1c0fa877ba386d38c38595f8ec6d236\"" Dec 13 01:48:59.840685 containerd[1449]: time="2024-12-13T01:48:59.840567529Z" level=info msg="Forcibly stopping sandbox \"6e05dacb397eb09bdb17cafb5d5150dcb1c0fa877ba386d38c38595f8ec6d236\"" Dec 13 01:48:59.906476 containerd[1449]: 2024-12-13 01:48:59.875 [WARNING][4894] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6e05dacb397eb09bdb17cafb5d5150dcb1c0fa877ba386d38c38595f8ec6d236" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--hwhh6-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"1e0c6640-314d-45ec-b91c-da2f72cfd50a", ResourceVersion:"868", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 48, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e769e57ed6691c53b4a9dc8f993294cd7273311f7c4582e91f28b41db26e750e", Pod:"coredns-6f6b679f8f-hwhh6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calica6042ecefd", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:48:59.906476 containerd[1449]: 2024-12-13 01:48:59.875 [INFO][4894] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="6e05dacb397eb09bdb17cafb5d5150dcb1c0fa877ba386d38c38595f8ec6d236" Dec 13 01:48:59.906476 containerd[1449]: 2024-12-13 01:48:59.875 [INFO][4894] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6e05dacb397eb09bdb17cafb5d5150dcb1c0fa877ba386d38c38595f8ec6d236" iface="eth0" netns="" Dec 13 01:48:59.906476 containerd[1449]: 2024-12-13 01:48:59.875 [INFO][4894] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="6e05dacb397eb09bdb17cafb5d5150dcb1c0fa877ba386d38c38595f8ec6d236" Dec 13 01:48:59.906476 containerd[1449]: 2024-12-13 01:48:59.875 [INFO][4894] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6e05dacb397eb09bdb17cafb5d5150dcb1c0fa877ba386d38c38595f8ec6d236" Dec 13 01:48:59.906476 containerd[1449]: 2024-12-13 01:48:59.893 [INFO][4901] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6e05dacb397eb09bdb17cafb5d5150dcb1c0fa877ba386d38c38595f8ec6d236" HandleID="k8s-pod-network.6e05dacb397eb09bdb17cafb5d5150dcb1c0fa877ba386d38c38595f8ec6d236" Workload="localhost-k8s-coredns--6f6b679f8f--hwhh6-eth0" Dec 13 01:48:59.906476 containerd[1449]: 2024-12-13 01:48:59.893 [INFO][4901] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:48:59.906476 containerd[1449]: 2024-12-13 01:48:59.893 [INFO][4901] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:48:59.906476 containerd[1449]: 2024-12-13 01:48:59.901 [WARNING][4901] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6e05dacb397eb09bdb17cafb5d5150dcb1c0fa877ba386d38c38595f8ec6d236" HandleID="k8s-pod-network.6e05dacb397eb09bdb17cafb5d5150dcb1c0fa877ba386d38c38595f8ec6d236" Workload="localhost-k8s-coredns--6f6b679f8f--hwhh6-eth0" Dec 13 01:48:59.906476 containerd[1449]: 2024-12-13 01:48:59.902 [INFO][4901] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6e05dacb397eb09bdb17cafb5d5150dcb1c0fa877ba386d38c38595f8ec6d236" HandleID="k8s-pod-network.6e05dacb397eb09bdb17cafb5d5150dcb1c0fa877ba386d38c38595f8ec6d236" Workload="localhost-k8s-coredns--6f6b679f8f--hwhh6-eth0" Dec 13 01:48:59.906476 containerd[1449]: 2024-12-13 01:48:59.903 [INFO][4901] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:48:59.906476 containerd[1449]: 2024-12-13 01:48:59.905 [INFO][4894] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="6e05dacb397eb09bdb17cafb5d5150dcb1c0fa877ba386d38c38595f8ec6d236" Dec 13 01:48:59.907035 containerd[1449]: time="2024-12-13T01:48:59.906511109Z" level=info msg="TearDown network for sandbox \"6e05dacb397eb09bdb17cafb5d5150dcb1c0fa877ba386d38c38595f8ec6d236\" successfully" Dec 13 01:48:59.926293 containerd[1449]: time="2024-12-13T01:48:59.926241947Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6e05dacb397eb09bdb17cafb5d5150dcb1c0fa877ba386d38c38595f8ec6d236\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 01:48:59.926396 containerd[1449]: time="2024-12-13T01:48:59.926323514Z" level=info msg="RemovePodSandbox \"6e05dacb397eb09bdb17cafb5d5150dcb1c0fa877ba386d38c38595f8ec6d236\" returns successfully" Dec 13 01:48:59.927114 containerd[1449]: time="2024-12-13T01:48:59.926857477Z" level=info msg="StopPodSandbox for \"9b97d4a163e2480f0a12e827e359cb721e036f51d99ffbb29491ec33ba750e08\"" Dec 13 01:48:59.933854 systemd[1]: Started sshd@13-10.0.0.141:22-10.0.0.1:37182.service - OpenSSH per-connection server daemon (10.0.0.1:37182). Dec 13 01:48:59.983662 sshd[4925]: Accepted publickey for core from 10.0.0.1 port 37182 ssh2: RSA SHA256:yVKhZEHbC7ylZ7bY3Y8pwdh1t/xp6Vz/y3yLFfd9j+Q Dec 13 01:48:59.986375 sshd[4925]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:48:59.994440 systemd-logind[1427]: New session 14 of user core. Dec 13 01:48:59.997968 systemd[1]: Started session-14.scope - Session 14 of User core. Dec 13 01:49:00.009292 containerd[1449]: 2024-12-13 01:48:59.971 [WARNING][4926] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9b97d4a163e2480f0a12e827e359cb721e036f51d99ffbb29491ec33ba750e08" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--tv6tv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"1079435b-e60b-443f-932d-e02d21e8e429", ResourceVersion:"896", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 48, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e33cc7cf3607af5b291c340baf71a7b270b655b4af4725b0a0e9a4e419bbc0d4", Pod:"csi-node-driver-tv6tv", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic68c9ceec85", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:49:00.009292 containerd[1449]: 2024-12-13 01:48:59.971 [INFO][4926] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="9b97d4a163e2480f0a12e827e359cb721e036f51d99ffbb29491ec33ba750e08" Dec 13 01:49:00.009292 containerd[1449]: 2024-12-13 01:48:59.971 [INFO][4926] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9b97d4a163e2480f0a12e827e359cb721e036f51d99ffbb29491ec33ba750e08" iface="eth0" netns="" Dec 13 01:49:00.009292 containerd[1449]: 2024-12-13 01:48:59.971 [INFO][4926] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9b97d4a163e2480f0a12e827e359cb721e036f51d99ffbb29491ec33ba750e08" Dec 13 01:49:00.009292 containerd[1449]: 2024-12-13 01:48:59.971 [INFO][4926] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9b97d4a163e2480f0a12e827e359cb721e036f51d99ffbb29491ec33ba750e08" Dec 13 01:49:00.009292 containerd[1449]: 2024-12-13 01:48:59.993 [INFO][4935] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9b97d4a163e2480f0a12e827e359cb721e036f51d99ffbb29491ec33ba750e08" HandleID="k8s-pod-network.9b97d4a163e2480f0a12e827e359cb721e036f51d99ffbb29491ec33ba750e08" Workload="localhost-k8s-csi--node--driver--tv6tv-eth0" Dec 13 01:49:00.009292 containerd[1449]: 2024-12-13 01:48:59.993 [INFO][4935] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:49:00.009292 containerd[1449]: 2024-12-13 01:48:59.993 [INFO][4935] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:49:00.009292 containerd[1449]: 2024-12-13 01:49:00.004 [WARNING][4935] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9b97d4a163e2480f0a12e827e359cb721e036f51d99ffbb29491ec33ba750e08" HandleID="k8s-pod-network.9b97d4a163e2480f0a12e827e359cb721e036f51d99ffbb29491ec33ba750e08" Workload="localhost-k8s-csi--node--driver--tv6tv-eth0" Dec 13 01:49:00.009292 containerd[1449]: 2024-12-13 01:49:00.004 [INFO][4935] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9b97d4a163e2480f0a12e827e359cb721e036f51d99ffbb29491ec33ba750e08" HandleID="k8s-pod-network.9b97d4a163e2480f0a12e827e359cb721e036f51d99ffbb29491ec33ba750e08" Workload="localhost-k8s-csi--node--driver--tv6tv-eth0" Dec 13 01:49:00.009292 containerd[1449]: 2024-12-13 01:49:00.006 [INFO][4935] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:49:00.009292 containerd[1449]: 2024-12-13 01:49:00.007 [INFO][4926] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="9b97d4a163e2480f0a12e827e359cb721e036f51d99ffbb29491ec33ba750e08" Dec 13 01:49:00.009947 containerd[1449]: time="2024-12-13T01:49:00.009350231Z" level=info msg="TearDown network for sandbox \"9b97d4a163e2480f0a12e827e359cb721e036f51d99ffbb29491ec33ba750e08\" successfully" Dec 13 01:49:00.009947 containerd[1449]: time="2024-12-13T01:49:00.009375553Z" level=info msg="StopPodSandbox for \"9b97d4a163e2480f0a12e827e359cb721e036f51d99ffbb29491ec33ba750e08\" returns successfully" Dec 13 01:49:00.009947 containerd[1449]: time="2024-12-13T01:49:00.009894035Z" level=info msg="RemovePodSandbox for \"9b97d4a163e2480f0a12e827e359cb721e036f51d99ffbb29491ec33ba750e08\"" Dec 13 01:49:00.009947 containerd[1449]: time="2024-12-13T01:49:00.009933838Z" level=info msg="Forcibly stopping sandbox \"9b97d4a163e2480f0a12e827e359cb721e036f51d99ffbb29491ec33ba750e08\"" Dec 13 01:49:00.080751 containerd[1449]: 2024-12-13 01:49:00.042 [WARNING][4958] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9b97d4a163e2480f0a12e827e359cb721e036f51d99ffbb29491ec33ba750e08" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--tv6tv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"1079435b-e60b-443f-932d-e02d21e8e429", ResourceVersion:"896", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 48, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e33cc7cf3607af5b291c340baf71a7b270b655b4af4725b0a0e9a4e419bbc0d4", Pod:"csi-node-driver-tv6tv", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic68c9ceec85", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:49:00.080751 containerd[1449]: 2024-12-13 01:49:00.042 [INFO][4958] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="9b97d4a163e2480f0a12e827e359cb721e036f51d99ffbb29491ec33ba750e08" Dec 13 01:49:00.080751 containerd[1449]: 2024-12-13 01:49:00.042 [INFO][4958] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9b97d4a163e2480f0a12e827e359cb721e036f51d99ffbb29491ec33ba750e08" iface="eth0" netns="" Dec 13 01:49:00.080751 containerd[1449]: 2024-12-13 01:49:00.042 [INFO][4958] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9b97d4a163e2480f0a12e827e359cb721e036f51d99ffbb29491ec33ba750e08" Dec 13 01:49:00.080751 containerd[1449]: 2024-12-13 01:49:00.042 [INFO][4958] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9b97d4a163e2480f0a12e827e359cb721e036f51d99ffbb29491ec33ba750e08" Dec 13 01:49:00.080751 containerd[1449]: 2024-12-13 01:49:00.065 [INFO][4965] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9b97d4a163e2480f0a12e827e359cb721e036f51d99ffbb29491ec33ba750e08" HandleID="k8s-pod-network.9b97d4a163e2480f0a12e827e359cb721e036f51d99ffbb29491ec33ba750e08" Workload="localhost-k8s-csi--node--driver--tv6tv-eth0" Dec 13 01:49:00.080751 containerd[1449]: 2024-12-13 01:49:00.065 [INFO][4965] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:49:00.080751 containerd[1449]: 2024-12-13 01:49:00.065 [INFO][4965] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:49:00.080751 containerd[1449]: 2024-12-13 01:49:00.074 [WARNING][4965] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9b97d4a163e2480f0a12e827e359cb721e036f51d99ffbb29491ec33ba750e08" HandleID="k8s-pod-network.9b97d4a163e2480f0a12e827e359cb721e036f51d99ffbb29491ec33ba750e08" Workload="localhost-k8s-csi--node--driver--tv6tv-eth0" Dec 13 01:49:00.080751 containerd[1449]: 2024-12-13 01:49:00.075 [INFO][4965] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9b97d4a163e2480f0a12e827e359cb721e036f51d99ffbb29491ec33ba750e08" HandleID="k8s-pod-network.9b97d4a163e2480f0a12e827e359cb721e036f51d99ffbb29491ec33ba750e08" Workload="localhost-k8s-csi--node--driver--tv6tv-eth0" Dec 13 01:49:00.080751 containerd[1449]: 2024-12-13 01:49:00.077 [INFO][4965] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:49:00.080751 containerd[1449]: 2024-12-13 01:49:00.079 [INFO][4958] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="9b97d4a163e2480f0a12e827e359cb721e036f51d99ffbb29491ec33ba750e08" Dec 13 01:49:00.081266 containerd[1449]: time="2024-12-13T01:49:00.080759045Z" level=info msg="TearDown network for sandbox \"9b97d4a163e2480f0a12e827e359cb721e036f51d99ffbb29491ec33ba750e08\" successfully" Dec 13 01:49:00.084101 containerd[1449]: time="2024-12-13T01:49:00.084066551Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9b97d4a163e2480f0a12e827e359cb721e036f51d99ffbb29491ec33ba750e08\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 01:49:00.084157 containerd[1449]: time="2024-12-13T01:49:00.084129116Z" level=info msg="RemovePodSandbox \"9b97d4a163e2480f0a12e827e359cb721e036f51d99ffbb29491ec33ba750e08\" returns successfully" Dec 13 01:49:00.084648 containerd[1449]: time="2024-12-13T01:49:00.084577912Z" level=info msg="StopPodSandbox for \"00abbd5d43ffcc0c24905f5b2367a2c892ac5cbc2d2cc5c32bba7f8ee2a5587c\"" Dec 13 01:49:00.161386 containerd[1449]: 2024-12-13 01:49:00.120 [WARNING][4996] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="00abbd5d43ffcc0c24905f5b2367a2c892ac5cbc2d2cc5c32bba7f8ee2a5587c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--568767dfbd--c99fj-eth0", GenerateName:"calico-apiserver-568767dfbd-", Namespace:"calico-apiserver", SelfLink:"", UID:"1e6eb865-39dc-4ee5-9cdc-f699e1d6b8f5", ResourceVersion:"920", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 48, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"568767dfbd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"86f256d0eeb5f6155eeec9f376c9b2b3d82ed845a5fbc08c42fc7d3992b39492", Pod:"calico-apiserver-568767dfbd-c99fj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali402b7700f56", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:49:00.161386 containerd[1449]: 2024-12-13 01:49:00.120 [INFO][4996] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="00abbd5d43ffcc0c24905f5b2367a2c892ac5cbc2d2cc5c32bba7f8ee2a5587c" Dec 13 01:49:00.161386 containerd[1449]: 2024-12-13 01:49:00.120 [INFO][4996] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="00abbd5d43ffcc0c24905f5b2367a2c892ac5cbc2d2cc5c32bba7f8ee2a5587c" iface="eth0" netns="" Dec 13 01:49:00.161386 containerd[1449]: 2024-12-13 01:49:00.120 [INFO][4996] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="00abbd5d43ffcc0c24905f5b2367a2c892ac5cbc2d2cc5c32bba7f8ee2a5587c" Dec 13 01:49:00.161386 containerd[1449]: 2024-12-13 01:49:00.120 [INFO][4996] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="00abbd5d43ffcc0c24905f5b2367a2c892ac5cbc2d2cc5c32bba7f8ee2a5587c" Dec 13 01:49:00.161386 containerd[1449]: 2024-12-13 01:49:00.144 [INFO][5003] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="00abbd5d43ffcc0c24905f5b2367a2c892ac5cbc2d2cc5c32bba7f8ee2a5587c" HandleID="k8s-pod-network.00abbd5d43ffcc0c24905f5b2367a2c892ac5cbc2d2cc5c32bba7f8ee2a5587c" Workload="localhost-k8s-calico--apiserver--568767dfbd--c99fj-eth0" Dec 13 01:49:00.161386 containerd[1449]: 2024-12-13 01:49:00.144 [INFO][5003] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:49:00.161386 containerd[1449]: 2024-12-13 01:49:00.144 [INFO][5003] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:49:00.161386 containerd[1449]: 2024-12-13 01:49:00.156 [WARNING][5003] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="00abbd5d43ffcc0c24905f5b2367a2c892ac5cbc2d2cc5c32bba7f8ee2a5587c" HandleID="k8s-pod-network.00abbd5d43ffcc0c24905f5b2367a2c892ac5cbc2d2cc5c32bba7f8ee2a5587c" Workload="localhost-k8s-calico--apiserver--568767dfbd--c99fj-eth0" Dec 13 01:49:00.161386 containerd[1449]: 2024-12-13 01:49:00.156 [INFO][5003] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="00abbd5d43ffcc0c24905f5b2367a2c892ac5cbc2d2cc5c32bba7f8ee2a5587c" HandleID="k8s-pod-network.00abbd5d43ffcc0c24905f5b2367a2c892ac5cbc2d2cc5c32bba7f8ee2a5587c" Workload="localhost-k8s-calico--apiserver--568767dfbd--c99fj-eth0" Dec 13 01:49:00.161386 containerd[1449]: 2024-12-13 01:49:00.157 [INFO][5003] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:49:00.161386 containerd[1449]: 2024-12-13 01:49:00.159 [INFO][4996] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="00abbd5d43ffcc0c24905f5b2367a2c892ac5cbc2d2cc5c32bba7f8ee2a5587c" Dec 13 01:49:00.161818 containerd[1449]: time="2024-12-13T01:49:00.161413282Z" level=info msg="TearDown network for sandbox \"00abbd5d43ffcc0c24905f5b2367a2c892ac5cbc2d2cc5c32bba7f8ee2a5587c\" successfully" Dec 13 01:49:00.161818 containerd[1449]: time="2024-12-13T01:49:00.161438484Z" level=info msg="StopPodSandbox for \"00abbd5d43ffcc0c24905f5b2367a2c892ac5cbc2d2cc5c32bba7f8ee2a5587c\" returns successfully" Dec 13 01:49:00.161961 containerd[1449]: time="2024-12-13T01:49:00.161890880Z" level=info msg="RemovePodSandbox for \"00abbd5d43ffcc0c24905f5b2367a2c892ac5cbc2d2cc5c32bba7f8ee2a5587c\"" Dec 13 01:49:00.161989 containerd[1449]: time="2024-12-13T01:49:00.161968846Z" level=info msg="Forcibly stopping sandbox \"00abbd5d43ffcc0c24905f5b2367a2c892ac5cbc2d2cc5c32bba7f8ee2a5587c\"" Dec 13 01:49:00.209705 sshd[4925]: pam_unix(sshd:session): session closed for user core Dec 13 01:49:00.214147 systemd[1]: sshd@13-10.0.0.141:22-10.0.0.1:37182.service: Deactivated successfully. Dec 13 01:49:00.216616 systemd[1]: session-14.scope: Deactivated successfully. Dec 13 01:49:00.217516 systemd-logind[1427]: Session 14 logged out. Waiting for processes to exit. Dec 13 01:49:00.218473 systemd-logind[1427]: Removed session 14. Dec 13 01:49:00.234942 containerd[1449]: 2024-12-13 01:49:00.200 [WARNING][5026] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="00abbd5d43ffcc0c24905f5b2367a2c892ac5cbc2d2cc5c32bba7f8ee2a5587c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--568767dfbd--c99fj-eth0", GenerateName:"calico-apiserver-568767dfbd-", Namespace:"calico-apiserver", SelfLink:"", UID:"1e6eb865-39dc-4ee5-9cdc-f699e1d6b8f5", ResourceVersion:"920", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 48, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"568767dfbd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"86f256d0eeb5f6155eeec9f376c9b2b3d82ed845a5fbc08c42fc7d3992b39492", Pod:"calico-apiserver-568767dfbd-c99fj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali402b7700f56", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:49:00.234942 containerd[1449]: 2024-12-13 01:49:00.200 [INFO][5026] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="00abbd5d43ffcc0c24905f5b2367a2c892ac5cbc2d2cc5c32bba7f8ee2a5587c" Dec 13 01:49:00.234942 containerd[1449]: 2024-12-13 01:49:00.200 [INFO][5026] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="00abbd5d43ffcc0c24905f5b2367a2c892ac5cbc2d2cc5c32bba7f8ee2a5587c" iface="eth0" netns="" Dec 13 01:49:00.234942 containerd[1449]: 2024-12-13 01:49:00.200 [INFO][5026] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="00abbd5d43ffcc0c24905f5b2367a2c892ac5cbc2d2cc5c32bba7f8ee2a5587c" Dec 13 01:49:00.234942 containerd[1449]: 2024-12-13 01:49:00.200 [INFO][5026] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="00abbd5d43ffcc0c24905f5b2367a2c892ac5cbc2d2cc5c32bba7f8ee2a5587c" Dec 13 01:49:00.234942 containerd[1449]: 2024-12-13 01:49:00.221 [INFO][5034] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="00abbd5d43ffcc0c24905f5b2367a2c892ac5cbc2d2cc5c32bba7f8ee2a5587c" HandleID="k8s-pod-network.00abbd5d43ffcc0c24905f5b2367a2c892ac5cbc2d2cc5c32bba7f8ee2a5587c" Workload="localhost-k8s-calico--apiserver--568767dfbd--c99fj-eth0" Dec 13 01:49:00.234942 containerd[1449]: 2024-12-13 01:49:00.222 [INFO][5034] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:49:00.234942 containerd[1449]: 2024-12-13 01:49:00.222 [INFO][5034] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:49:00.234942 containerd[1449]: 2024-12-13 01:49:00.230 [WARNING][5034] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="00abbd5d43ffcc0c24905f5b2367a2c892ac5cbc2d2cc5c32bba7f8ee2a5587c" HandleID="k8s-pod-network.00abbd5d43ffcc0c24905f5b2367a2c892ac5cbc2d2cc5c32bba7f8ee2a5587c" Workload="localhost-k8s-calico--apiserver--568767dfbd--c99fj-eth0" Dec 13 01:49:00.234942 containerd[1449]: 2024-12-13 01:49:00.230 [INFO][5034] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="00abbd5d43ffcc0c24905f5b2367a2c892ac5cbc2d2cc5c32bba7f8ee2a5587c" HandleID="k8s-pod-network.00abbd5d43ffcc0c24905f5b2367a2c892ac5cbc2d2cc5c32bba7f8ee2a5587c" Workload="localhost-k8s-calico--apiserver--568767dfbd--c99fj-eth0" Dec 13 01:49:00.234942 containerd[1449]: 2024-12-13 01:49:00.231 [INFO][5034] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:49:00.234942 containerd[1449]: 2024-12-13 01:49:00.233 [INFO][5026] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="00abbd5d43ffcc0c24905f5b2367a2c892ac5cbc2d2cc5c32bba7f8ee2a5587c" Dec 13 01:49:00.235282 containerd[1449]: time="2024-12-13T01:49:00.234981589Z" level=info msg="TearDown network for sandbox \"00abbd5d43ffcc0c24905f5b2367a2c892ac5cbc2d2cc5c32bba7f8ee2a5587c\" successfully" Dec 13 01:49:00.237777 containerd[1449]: time="2024-12-13T01:49:00.237748891Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"00abbd5d43ffcc0c24905f5b2367a2c892ac5cbc2d2cc5c32bba7f8ee2a5587c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 01:49:00.237820 containerd[1449]: time="2024-12-13T01:49:00.237803776Z" level=info msg="RemovePodSandbox \"00abbd5d43ffcc0c24905f5b2367a2c892ac5cbc2d2cc5c32bba7f8ee2a5587c\" returns successfully" Dec 13 01:49:00.238250 containerd[1449]: time="2024-12-13T01:49:00.238228210Z" level=info msg="StopPodSandbox for \"e9e6f61c4c1ed3323381b693883ae8a6d5fbbc71f5b753b0da1bc35d5d9a0e84\"" Dec 13 01:49:00.300982 containerd[1449]: 2024-12-13 01:49:00.270 [WARNING][5059] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e9e6f61c4c1ed3323381b693883ae8a6d5fbbc71f5b753b0da1bc35d5d9a0e84" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5d4845d985--cnl5g-eth0", GenerateName:"calico-kube-controllers-5d4845d985-", Namespace:"calico-system", SelfLink:"", UID:"15fcbc53-768f-4fca-83b6-c146a5c50cc1", ResourceVersion:"905", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 48, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5d4845d985", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"816dfcd254b340bcd20b13b96f507cd88813bf23fed0e5b13fbed42f66bdbeb0", Pod:"calico-kube-controllers-5d4845d985-cnl5g", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali4a4da330dbf", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:49:00.300982 containerd[1449]: 2024-12-13 01:49:00.271 [INFO][5059] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="e9e6f61c4c1ed3323381b693883ae8a6d5fbbc71f5b753b0da1bc35d5d9a0e84" Dec 13 01:49:00.300982 containerd[1449]: 2024-12-13 01:49:00.271 [INFO][5059] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e9e6f61c4c1ed3323381b693883ae8a6d5fbbc71f5b753b0da1bc35d5d9a0e84" iface="eth0" netns="" Dec 13 01:49:00.300982 containerd[1449]: 2024-12-13 01:49:00.271 [INFO][5059] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e9e6f61c4c1ed3323381b693883ae8a6d5fbbc71f5b753b0da1bc35d5d9a0e84" Dec 13 01:49:00.300982 containerd[1449]: 2024-12-13 01:49:00.271 [INFO][5059] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e9e6f61c4c1ed3323381b693883ae8a6d5fbbc71f5b753b0da1bc35d5d9a0e84" Dec 13 01:49:00.300982 containerd[1449]: 2024-12-13 01:49:00.288 [INFO][5067] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e9e6f61c4c1ed3323381b693883ae8a6d5fbbc71f5b753b0da1bc35d5d9a0e84" HandleID="k8s-pod-network.e9e6f61c4c1ed3323381b693883ae8a6d5fbbc71f5b753b0da1bc35d5d9a0e84" Workload="localhost-k8s-calico--kube--controllers--5d4845d985--cnl5g-eth0" Dec 13 01:49:00.300982 containerd[1449]: 2024-12-13 01:49:00.288 [INFO][5067] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:49:00.300982 containerd[1449]: 2024-12-13 01:49:00.288 [INFO][5067] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:49:00.300982 containerd[1449]: 2024-12-13 01:49:00.296 [WARNING][5067] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e9e6f61c4c1ed3323381b693883ae8a6d5fbbc71f5b753b0da1bc35d5d9a0e84" HandleID="k8s-pod-network.e9e6f61c4c1ed3323381b693883ae8a6d5fbbc71f5b753b0da1bc35d5d9a0e84" Workload="localhost-k8s-calico--kube--controllers--5d4845d985--cnl5g-eth0" Dec 13 01:49:00.300982 containerd[1449]: 2024-12-13 01:49:00.296 [INFO][5067] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e9e6f61c4c1ed3323381b693883ae8a6d5fbbc71f5b753b0da1bc35d5d9a0e84" HandleID="k8s-pod-network.e9e6f61c4c1ed3323381b693883ae8a6d5fbbc71f5b753b0da1bc35d5d9a0e84" Workload="localhost-k8s-calico--kube--controllers--5d4845d985--cnl5g-eth0" Dec 13 01:49:00.300982 containerd[1449]: 2024-12-13 01:49:00.297 [INFO][5067] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:49:00.300982 containerd[1449]: 2024-12-13 01:49:00.299 [INFO][5059] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="e9e6f61c4c1ed3323381b693883ae8a6d5fbbc71f5b753b0da1bc35d5d9a0e84" Dec 13 01:49:00.300982 containerd[1449]: time="2024-12-13T01:49:00.300857639Z" level=info msg="TearDown network for sandbox \"e9e6f61c4c1ed3323381b693883ae8a6d5fbbc71f5b753b0da1bc35d5d9a0e84\" successfully" Dec 13 01:49:00.300982 containerd[1449]: time="2024-12-13T01:49:00.300882721Z" level=info msg="StopPodSandbox for \"e9e6f61c4c1ed3323381b693883ae8a6d5fbbc71f5b753b0da1bc35d5d9a0e84\" returns successfully" Dec 13 01:49:00.301504 containerd[1449]: time="2024-12-13T01:49:00.301353239Z" level=info msg="RemovePodSandbox for \"e9e6f61c4c1ed3323381b693883ae8a6d5fbbc71f5b753b0da1bc35d5d9a0e84\"" Dec 13 01:49:00.301504 containerd[1449]: time="2024-12-13T01:49:00.301383361Z" level=info msg="Forcibly stopping sandbox \"e9e6f61c4c1ed3323381b693883ae8a6d5fbbc71f5b753b0da1bc35d5d9a0e84\"" Dec 13 01:49:00.366689 containerd[1449]: 2024-12-13 01:49:00.334 [WARNING][5089] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e9e6f61c4c1ed3323381b693883ae8a6d5fbbc71f5b753b0da1bc35d5d9a0e84" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5d4845d985--cnl5g-eth0", GenerateName:"calico-kube-controllers-5d4845d985-", Namespace:"calico-system", SelfLink:"", UID:"15fcbc53-768f-4fca-83b6-c146a5c50cc1", ResourceVersion:"905", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 48, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5d4845d985", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"816dfcd254b340bcd20b13b96f507cd88813bf23fed0e5b13fbed42f66bdbeb0", Pod:"calico-kube-controllers-5d4845d985-cnl5g", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali4a4da330dbf", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:49:00.366689 containerd[1449]: 2024-12-13 01:49:00.335 [INFO][5089] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="e9e6f61c4c1ed3323381b693883ae8a6d5fbbc71f5b753b0da1bc35d5d9a0e84" Dec 13 01:49:00.366689 containerd[1449]: 2024-12-13 01:49:00.335 [INFO][5089] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e9e6f61c4c1ed3323381b693883ae8a6d5fbbc71f5b753b0da1bc35d5d9a0e84" iface="eth0" netns="" Dec 13 01:49:00.366689 containerd[1449]: 2024-12-13 01:49:00.335 [INFO][5089] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e9e6f61c4c1ed3323381b693883ae8a6d5fbbc71f5b753b0da1bc35d5d9a0e84" Dec 13 01:49:00.366689 containerd[1449]: 2024-12-13 01:49:00.335 [INFO][5089] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e9e6f61c4c1ed3323381b693883ae8a6d5fbbc71f5b753b0da1bc35d5d9a0e84" Dec 13 01:49:00.366689 containerd[1449]: 2024-12-13 01:49:00.353 [INFO][5097] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e9e6f61c4c1ed3323381b693883ae8a6d5fbbc71f5b753b0da1bc35d5d9a0e84" HandleID="k8s-pod-network.e9e6f61c4c1ed3323381b693883ae8a6d5fbbc71f5b753b0da1bc35d5d9a0e84" Workload="localhost-k8s-calico--kube--controllers--5d4845d985--cnl5g-eth0" Dec 13 01:49:00.366689 containerd[1449]: 2024-12-13 01:49:00.353 [INFO][5097] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:49:00.366689 containerd[1449]: 2024-12-13 01:49:00.354 [INFO][5097] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:49:00.366689 containerd[1449]: 2024-12-13 01:49:00.361 [WARNING][5097] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e9e6f61c4c1ed3323381b693883ae8a6d5fbbc71f5b753b0da1bc35d5d9a0e84" HandleID="k8s-pod-network.e9e6f61c4c1ed3323381b693883ae8a6d5fbbc71f5b753b0da1bc35d5d9a0e84" Workload="localhost-k8s-calico--kube--controllers--5d4845d985--cnl5g-eth0" Dec 13 01:49:00.366689 containerd[1449]: 2024-12-13 01:49:00.361 [INFO][5097] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e9e6f61c4c1ed3323381b693883ae8a6d5fbbc71f5b753b0da1bc35d5d9a0e84" HandleID="k8s-pod-network.e9e6f61c4c1ed3323381b693883ae8a6d5fbbc71f5b753b0da1bc35d5d9a0e84" Workload="localhost-k8s-calico--kube--controllers--5d4845d985--cnl5g-eth0" Dec 13 01:49:00.366689 containerd[1449]: 2024-12-13 01:49:00.363 [INFO][5097] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:49:00.366689 containerd[1449]: 2024-12-13 01:49:00.364 [INFO][5089] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="e9e6f61c4c1ed3323381b693883ae8a6d5fbbc71f5b753b0da1bc35d5d9a0e84" Dec 13 01:49:00.366689 containerd[1449]: time="2024-12-13T01:49:00.366022031Z" level=info msg="TearDown network for sandbox \"e9e6f61c4c1ed3323381b693883ae8a6d5fbbc71f5b753b0da1bc35d5d9a0e84\" successfully" Dec 13 01:49:00.379709 containerd[1449]: time="2024-12-13T01:49:00.379661647Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e9e6f61c4c1ed3323381b693883ae8a6d5fbbc71f5b753b0da1bc35d5d9a0e84\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 01:49:00.379923 containerd[1449]: time="2024-12-13T01:49:00.379726212Z" level=info msg="RemovePodSandbox \"e9e6f61c4c1ed3323381b693883ae8a6d5fbbc71f5b753b0da1bc35d5d9a0e84\" returns successfully" Dec 13 01:49:00.380224 containerd[1449]: time="2024-12-13T01:49:00.380182088Z" level=info msg="StopPodSandbox for \"2f930d0151c8c21688b5032961988634c8927a89b6fc267d50b95e464f66b1aa\"" Dec 13 01:49:00.443498 containerd[1449]: 2024-12-13 01:49:00.412 [WARNING][5119] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2f930d0151c8c21688b5032961988634c8927a89b6fc267d50b95e464f66b1aa" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--ccdvt-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"5ccb309b-ed47-4e0c-ae45-cf7bfdd924c2", ResourceVersion:"830", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 48, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"845e11f6cab58dfc72328ffdbca08a5f2ababb3d7b27f4214e221b4d777197bd", Pod:"coredns-6f6b679f8f-ccdvt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali292a06fe556", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:49:00.443498 containerd[1449]: 2024-12-13 01:49:00.413 [INFO][5119] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="2f930d0151c8c21688b5032961988634c8927a89b6fc267d50b95e464f66b1aa" Dec 13 01:49:00.443498 containerd[1449]: 2024-12-13 01:49:00.413 [INFO][5119] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2f930d0151c8c21688b5032961988634c8927a89b6fc267d50b95e464f66b1aa" iface="eth0" netns="" Dec 13 01:49:00.443498 containerd[1449]: 2024-12-13 01:49:00.413 [INFO][5119] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="2f930d0151c8c21688b5032961988634c8927a89b6fc267d50b95e464f66b1aa" Dec 13 01:49:00.443498 containerd[1449]: 2024-12-13 01:49:00.413 [INFO][5119] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2f930d0151c8c21688b5032961988634c8927a89b6fc267d50b95e464f66b1aa" Dec 13 01:49:00.443498 containerd[1449]: 2024-12-13 01:49:00.430 [INFO][5126] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2f930d0151c8c21688b5032961988634c8927a89b6fc267d50b95e464f66b1aa" HandleID="k8s-pod-network.2f930d0151c8c21688b5032961988634c8927a89b6fc267d50b95e464f66b1aa" Workload="localhost-k8s-coredns--6f6b679f8f--ccdvt-eth0" Dec 13 01:49:00.443498 containerd[1449]: 2024-12-13 01:49:00.431 [INFO][5126] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:49:00.443498 containerd[1449]: 2024-12-13 01:49:00.431 [INFO][5126] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:49:00.443498 containerd[1449]: 2024-12-13 01:49:00.439 [WARNING][5126] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2f930d0151c8c21688b5032961988634c8927a89b6fc267d50b95e464f66b1aa" HandleID="k8s-pod-network.2f930d0151c8c21688b5032961988634c8927a89b6fc267d50b95e464f66b1aa" Workload="localhost-k8s-coredns--6f6b679f8f--ccdvt-eth0" Dec 13 01:49:00.443498 containerd[1449]: 2024-12-13 01:49:00.439 [INFO][5126] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2f930d0151c8c21688b5032961988634c8927a89b6fc267d50b95e464f66b1aa" HandleID="k8s-pod-network.2f930d0151c8c21688b5032961988634c8927a89b6fc267d50b95e464f66b1aa" Workload="localhost-k8s-coredns--6f6b679f8f--ccdvt-eth0" Dec 13 01:49:00.443498 containerd[1449]: 2024-12-13 01:49:00.440 [INFO][5126] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:49:00.443498 containerd[1449]: 2024-12-13 01:49:00.441 [INFO][5119] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="2f930d0151c8c21688b5032961988634c8927a89b6fc267d50b95e464f66b1aa" Dec 13 01:49:00.443936 containerd[1449]: time="2024-12-13T01:49:00.443541176Z" level=info msg="TearDown network for sandbox \"2f930d0151c8c21688b5032961988634c8927a89b6fc267d50b95e464f66b1aa\" successfully" Dec 13 01:49:00.443936 containerd[1449]: time="2024-12-13T01:49:00.443566578Z" level=info msg="StopPodSandbox for \"2f930d0151c8c21688b5032961988634c8927a89b6fc267d50b95e464f66b1aa\" returns successfully" Dec 13 01:49:00.444420 containerd[1449]: time="2024-12-13T01:49:00.444365042Z" level=info msg="RemovePodSandbox for \"2f930d0151c8c21688b5032961988634c8927a89b6fc267d50b95e464f66b1aa\"" Dec 13 01:49:00.444420 containerd[1449]: time="2024-12-13T01:49:00.444402405Z" level=info msg="Forcibly stopping sandbox \"2f930d0151c8c21688b5032961988634c8927a89b6fc267d50b95e464f66b1aa\"" Dec 13 01:49:00.506203 containerd[1449]: 2024-12-13 01:49:00.476 [WARNING][5148] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2f930d0151c8c21688b5032961988634c8927a89b6fc267d50b95e464f66b1aa" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--ccdvt-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"5ccb309b-ed47-4e0c-ae45-cf7bfdd924c2", ResourceVersion:"830", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 48, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"845e11f6cab58dfc72328ffdbca08a5f2ababb3d7b27f4214e221b4d777197bd", Pod:"coredns-6f6b679f8f-ccdvt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali292a06fe556", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:49:00.506203 containerd[1449]: 2024-12-13 01:49:00.476 [INFO][5148] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="2f930d0151c8c21688b5032961988634c8927a89b6fc267d50b95e464f66b1aa" Dec 13 01:49:00.506203 containerd[1449]: 2024-12-13 01:49:00.476 [INFO][5148] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2f930d0151c8c21688b5032961988634c8927a89b6fc267d50b95e464f66b1aa" iface="eth0" netns="" Dec 13 01:49:00.506203 containerd[1449]: 2024-12-13 01:49:00.476 [INFO][5148] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="2f930d0151c8c21688b5032961988634c8927a89b6fc267d50b95e464f66b1aa" Dec 13 01:49:00.506203 containerd[1449]: 2024-12-13 01:49:00.476 [INFO][5148] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2f930d0151c8c21688b5032961988634c8927a89b6fc267d50b95e464f66b1aa" Dec 13 01:49:00.506203 containerd[1449]: 2024-12-13 01:49:00.493 [INFO][5155] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2f930d0151c8c21688b5032961988634c8927a89b6fc267d50b95e464f66b1aa" HandleID="k8s-pod-network.2f930d0151c8c21688b5032961988634c8927a89b6fc267d50b95e464f66b1aa" Workload="localhost-k8s-coredns--6f6b679f8f--ccdvt-eth0" Dec 13 01:49:00.506203 containerd[1449]: 2024-12-13 01:49:00.493 [INFO][5155] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:49:00.506203 containerd[1449]: 2024-12-13 01:49:00.493 [INFO][5155] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:49:00.506203 containerd[1449]: 2024-12-13 01:49:00.501 [WARNING][5155] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2f930d0151c8c21688b5032961988634c8927a89b6fc267d50b95e464f66b1aa" HandleID="k8s-pod-network.2f930d0151c8c21688b5032961988634c8927a89b6fc267d50b95e464f66b1aa" Workload="localhost-k8s-coredns--6f6b679f8f--ccdvt-eth0" Dec 13 01:49:00.506203 containerd[1449]: 2024-12-13 01:49:00.501 [INFO][5155] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2f930d0151c8c21688b5032961988634c8927a89b6fc267d50b95e464f66b1aa" HandleID="k8s-pod-network.2f930d0151c8c21688b5032961988634c8927a89b6fc267d50b95e464f66b1aa" Workload="localhost-k8s-coredns--6f6b679f8f--ccdvt-eth0" Dec 13 01:49:00.506203 containerd[1449]: 2024-12-13 01:49:00.503 [INFO][5155] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:49:00.506203 containerd[1449]: 2024-12-13 01:49:00.504 [INFO][5148] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="2f930d0151c8c21688b5032961988634c8927a89b6fc267d50b95e464f66b1aa" Dec 13 01:49:00.506616 containerd[1449]: time="2024-12-13T01:49:00.506233650Z" level=info msg="TearDown network for sandbox \"2f930d0151c8c21688b5032961988634c8927a89b6fc267d50b95e464f66b1aa\" successfully" Dec 13 01:49:00.508883 containerd[1449]: time="2024-12-13T01:49:00.508849620Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2f930d0151c8c21688b5032961988634c8927a89b6fc267d50b95e464f66b1aa\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 01:49:00.508952 containerd[1449]: time="2024-12-13T01:49:00.508931947Z" level=info msg="RemovePodSandbox \"2f930d0151c8c21688b5032961988634c8927a89b6fc267d50b95e464f66b1aa\" returns successfully" Dec 13 01:49:00.509664 containerd[1449]: time="2024-12-13T01:49:00.509364181Z" level=info msg="StopPodSandbox for \"f09f7173214e2fad38a268637ebf03211361e08c2c1a94f4ac8919f604795447\"" Dec 13 01:49:00.572068 containerd[1449]: 2024-12-13 01:49:00.542 [WARNING][5178] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f09f7173214e2fad38a268637ebf03211361e08c2c1a94f4ac8919f604795447" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--568767dfbd--qv8xv-eth0", GenerateName:"calico-apiserver-568767dfbd-", Namespace:"calico-apiserver", SelfLink:"", UID:"c2e2bdd9-b460-49cb-90c4-8e7578a0674d", ResourceVersion:"864", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 48, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"568767dfbd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"495c67e5285176da7c96c65d9230b6140943c508845c3d671c70d6d51b10f77a", Pod:"calico-apiserver-568767dfbd-qv8xv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calicca0b7f8fd1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:49:00.572068 containerd[1449]: 2024-12-13 01:49:00.542 [INFO][5178] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="f09f7173214e2fad38a268637ebf03211361e08c2c1a94f4ac8919f604795447" Dec 13 01:49:00.572068 containerd[1449]: 2024-12-13 01:49:00.542 [INFO][5178] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f09f7173214e2fad38a268637ebf03211361e08c2c1a94f4ac8919f604795447" iface="eth0" netns="" Dec 13 01:49:00.572068 containerd[1449]: 2024-12-13 01:49:00.542 [INFO][5178] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="f09f7173214e2fad38a268637ebf03211361e08c2c1a94f4ac8919f604795447" Dec 13 01:49:00.572068 containerd[1449]: 2024-12-13 01:49:00.542 [INFO][5178] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f09f7173214e2fad38a268637ebf03211361e08c2c1a94f4ac8919f604795447" Dec 13 01:49:00.572068 containerd[1449]: 2024-12-13 01:49:00.560 [INFO][5186] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f09f7173214e2fad38a268637ebf03211361e08c2c1a94f4ac8919f604795447" HandleID="k8s-pod-network.f09f7173214e2fad38a268637ebf03211361e08c2c1a94f4ac8919f604795447" Workload="localhost-k8s-calico--apiserver--568767dfbd--qv8xv-eth0" Dec 13 01:49:00.572068 containerd[1449]: 2024-12-13 01:49:00.560 [INFO][5186] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:49:00.572068 containerd[1449]: 2024-12-13 01:49:00.560 [INFO][5186] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:49:00.572068 containerd[1449]: 2024-12-13 01:49:00.568 [WARNING][5186] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f09f7173214e2fad38a268637ebf03211361e08c2c1a94f4ac8919f604795447" HandleID="k8s-pod-network.f09f7173214e2fad38a268637ebf03211361e08c2c1a94f4ac8919f604795447" Workload="localhost-k8s-calico--apiserver--568767dfbd--qv8xv-eth0" Dec 13 01:49:00.572068 containerd[1449]: 2024-12-13 01:49:00.568 [INFO][5186] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f09f7173214e2fad38a268637ebf03211361e08c2c1a94f4ac8919f604795447" HandleID="k8s-pod-network.f09f7173214e2fad38a268637ebf03211361e08c2c1a94f4ac8919f604795447" Workload="localhost-k8s-calico--apiserver--568767dfbd--qv8xv-eth0" Dec 13 01:49:00.572068 containerd[1449]: 2024-12-13 01:49:00.569 [INFO][5186] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:49:00.572068 containerd[1449]: 2024-12-13 01:49:00.570 [INFO][5178] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="f09f7173214e2fad38a268637ebf03211361e08c2c1a94f4ac8919f604795447" Dec 13 01:49:00.572754 containerd[1449]: time="2024-12-13T01:49:00.572106339Z" level=info msg="TearDown network for sandbox \"f09f7173214e2fad38a268637ebf03211361e08c2c1a94f4ac8919f604795447\" successfully" Dec 13 01:49:00.572754 containerd[1449]: time="2024-12-13T01:49:00.572131822Z" level=info msg="StopPodSandbox for \"f09f7173214e2fad38a268637ebf03211361e08c2c1a94f4ac8919f604795447\" returns successfully" Dec 13 01:49:00.572754 containerd[1449]: time="2024-12-13T01:49:00.572657824Z" level=info msg="RemovePodSandbox for \"f09f7173214e2fad38a268637ebf03211361e08c2c1a94f4ac8919f604795447\"" Dec 13 01:49:00.572754 containerd[1449]: time="2024-12-13T01:49:00.572688306Z" level=info msg="Forcibly stopping sandbox \"f09f7173214e2fad38a268637ebf03211361e08c2c1a94f4ac8919f604795447\"" Dec 13 01:49:00.586863 kubelet[2458]: I1213 01:49:00.586187 2458 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 01:49:00.652759 containerd[1449]: 2024-12-13 01:49:00.616 [WARNING][5210] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f09f7173214e2fad38a268637ebf03211361e08c2c1a94f4ac8919f604795447" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--568767dfbd--qv8xv-eth0", GenerateName:"calico-apiserver-568767dfbd-", Namespace:"calico-apiserver", SelfLink:"", UID:"c2e2bdd9-b460-49cb-90c4-8e7578a0674d", ResourceVersion:"864", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 48, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"568767dfbd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"495c67e5285176da7c96c65d9230b6140943c508845c3d671c70d6d51b10f77a", Pod:"calico-apiserver-568767dfbd-qv8xv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calicca0b7f8fd1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:49:00.652759 containerd[1449]: 2024-12-13 01:49:00.616 [INFO][5210] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="f09f7173214e2fad38a268637ebf03211361e08c2c1a94f4ac8919f604795447" Dec 13 01:49:00.652759 containerd[1449]: 2024-12-13 01:49:00.616 [INFO][5210] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f09f7173214e2fad38a268637ebf03211361e08c2c1a94f4ac8919f604795447" iface="eth0" netns="" Dec 13 01:49:00.652759 containerd[1449]: 2024-12-13 01:49:00.616 [INFO][5210] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="f09f7173214e2fad38a268637ebf03211361e08c2c1a94f4ac8919f604795447" Dec 13 01:49:00.652759 containerd[1449]: 2024-12-13 01:49:00.616 [INFO][5210] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f09f7173214e2fad38a268637ebf03211361e08c2c1a94f4ac8919f604795447" Dec 13 01:49:00.652759 containerd[1449]: 2024-12-13 01:49:00.640 [INFO][5219] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f09f7173214e2fad38a268637ebf03211361e08c2c1a94f4ac8919f604795447" HandleID="k8s-pod-network.f09f7173214e2fad38a268637ebf03211361e08c2c1a94f4ac8919f604795447" Workload="localhost-k8s-calico--apiserver--568767dfbd--qv8xv-eth0" Dec 13 01:49:00.652759 containerd[1449]: 2024-12-13 01:49:00.640 [INFO][5219] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:49:00.652759 containerd[1449]: 2024-12-13 01:49:00.640 [INFO][5219] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:49:00.652759 containerd[1449]: 2024-12-13 01:49:00.648 [WARNING][5219] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f09f7173214e2fad38a268637ebf03211361e08c2c1a94f4ac8919f604795447" HandleID="k8s-pod-network.f09f7173214e2fad38a268637ebf03211361e08c2c1a94f4ac8919f604795447" Workload="localhost-k8s-calico--apiserver--568767dfbd--qv8xv-eth0" Dec 13 01:49:00.652759 containerd[1449]: 2024-12-13 01:49:00.648 [INFO][5219] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f09f7173214e2fad38a268637ebf03211361e08c2c1a94f4ac8919f604795447" HandleID="k8s-pod-network.f09f7173214e2fad38a268637ebf03211361e08c2c1a94f4ac8919f604795447" Workload="localhost-k8s-calico--apiserver--568767dfbd--qv8xv-eth0" Dec 13 01:49:00.652759 containerd[1449]: 2024-12-13 01:49:00.649 [INFO][5219] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:49:00.652759 containerd[1449]: 2024-12-13 01:49:00.651 [INFO][5210] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="f09f7173214e2fad38a268637ebf03211361e08c2c1a94f4ac8919f604795447" Dec 13 01:49:00.654346 containerd[1449]: time="2024-12-13T01:49:00.653185170Z" level=info msg="TearDown network for sandbox \"f09f7173214e2fad38a268637ebf03211361e08c2c1a94f4ac8919f604795447\" successfully" Dec 13 01:49:00.655800 containerd[1449]: time="2024-12-13T01:49:00.655769257Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f09f7173214e2fad38a268637ebf03211361e08c2c1a94f4ac8919f604795447\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 01:49:00.655939 containerd[1449]: time="2024-12-13T01:49:00.655921070Z" level=info msg="RemovePodSandbox \"f09f7173214e2fad38a268637ebf03211361e08c2c1a94f4ac8919f604795447\" returns successfully" Dec 13 01:49:02.535442 kubelet[2458]: I1213 01:49:02.535369 2458 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 01:49:05.225620 systemd[1]: Started sshd@14-10.0.0.141:22-10.0.0.1:58050.service - OpenSSH per-connection server daemon (10.0.0.1:58050). Dec 13 01:49:05.261972 sshd[5230]: Accepted publickey for core from 10.0.0.1 port 58050 ssh2: RSA SHA256:yVKhZEHbC7ylZ7bY3Y8pwdh1t/xp6Vz/y3yLFfd9j+Q Dec 13 01:49:05.263327 sshd[5230]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:49:05.267320 systemd-logind[1427]: New session 15 of user core. Dec 13 01:49:05.278772 systemd[1]: Started session-15.scope - Session 15 of User core. Dec 13 01:49:05.425984 sshd[5230]: pam_unix(sshd:session): session closed for user core Dec 13 01:49:05.429827 systemd[1]: sshd@14-10.0.0.141:22-10.0.0.1:58050.service: Deactivated successfully. Dec 13 01:49:05.431531 systemd[1]: session-15.scope: Deactivated successfully. Dec 13 01:49:05.432132 systemd-logind[1427]: Session 15 logged out. Waiting for processes to exit. Dec 13 01:49:05.433004 systemd-logind[1427]: Removed session 15. Dec 13 01:49:08.277706 systemd[1]: run-containerd-runc-k8s.io-eb2c7e08829a78aa6d31101b3f33aa2e7015284c6de298a5fcd17118f5e2e07a-runc.NYEtLm.mount: Deactivated successfully. 
Dec 13 01:49:08.377795 containerd[1449]: time="2024-12-13T01:49:08.377746639Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:49:08.378291 containerd[1449]: time="2024-12-13T01:49:08.378259049Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=31953828"
Dec 13 01:49:08.379114 containerd[1449]: time="2024-12-13T01:49:08.379084842Z" level=info msg="ImageCreate event name:\"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:49:08.381757 containerd[1449]: time="2024-12-13T01:49:08.381725492Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:49:08.383020 containerd[1449]: time="2024-12-13T01:49:08.382990340Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"33323450\" in 29.06720311s"
Dec 13 01:49:08.383137 containerd[1449]: time="2024-12-13T01:49:08.383118773Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\""
Dec 13 01:49:08.384097 containerd[1449]: time="2024-12-13T01:49:08.384071718Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\""
Dec 13 01:49:08.395712 containerd[1449]: time="2024-12-13T01:49:08.395683857Z" level=info msg="CreateContainer within sandbox \"816dfcd254b340bcd20b13b96f507cd88813bf23fed0e5b13fbed42f66bdbeb0\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}"
Dec 13 01:49:08.404292 containerd[1449]: time="2024-12-13T01:49:08.404178733Z" level=info msg="CreateContainer within sandbox \"816dfcd254b340bcd20b13b96f507cd88813bf23fed0e5b13fbed42f66bdbeb0\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"f2f9ff2c3f6f644e640acd50fee916362eaffb26947734cb86e50a2fc1fe60df\""
Dec 13 01:49:08.404681 containerd[1449]: time="2024-12-13T01:49:08.404655706Z" level=info msg="StartContainer for \"f2f9ff2c3f6f644e640acd50fee916362eaffb26947734cb86e50a2fc1fe60df\""
Dec 13 01:49:08.441758 systemd[1]: Started cri-containerd-f2f9ff2c3f6f644e640acd50fee916362eaffb26947734cb86e50a2fc1fe60df.scope - libcontainer container f2f9ff2c3f6f644e640acd50fee916362eaffb26947734cb86e50a2fc1fe60df.
Dec 13 01:49:08.473862 containerd[1449]: time="2024-12-13T01:49:08.473180564Z" level=info msg="StartContainer for \"f2f9ff2c3f6f644e640acd50fee916362eaffb26947734cb86e50a2fc1fe60df\" returns successfully"
Dec 13 01:49:09.086082 kubelet[2458]: I1213 01:49:09.085917 2458 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-5d4845d985-cnl5g" podStartSLOduration=26.147254057 podStartE2EDuration="56.085901202s" podCreationTimestamp="2024-12-13 01:48:13 +0000 UTC" firstStartedPulling="2024-12-13 01:48:38.445259823 +0000 UTC m=+38.761905878" lastFinishedPulling="2024-12-13 01:49:08.383907008 +0000 UTC m=+68.700553023" observedRunningTime="2024-12-13 01:49:09.085785808 +0000 UTC m=+69.402431863" watchObservedRunningTime="2024-12-13 01:49:09.085901202 +0000 UTC m=+69.402547257"
Dec 13 01:49:09.455422 containerd[1449]: time="2024-12-13T01:49:09.455365601Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:49:09.455970 containerd[1449]: time="2024-12-13T01:49:09.455824057Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=9883368"
Dec 13 01:49:09.456744 containerd[1449]: time="2024-12-13T01:49:09.456716289Z" level=info msg="ImageCreate event name:\"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:49:09.458682 containerd[1449]: time="2024-12-13T01:49:09.458645106Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:49:09.459699 containerd[1449]: time="2024-12-13T01:49:09.459669011Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11252974\" in 1.075559935s"
Dec 13 01:49:09.459775 containerd[1449]: time="2024-12-13T01:49:09.459705889Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\""
Dec 13 01:49:09.462882 containerd[1449]: time="2024-12-13T01:49:09.462838401Z" level=info msg="CreateContainer within sandbox \"e33cc7cf3607af5b291c340baf71a7b270b655b4af4725b0a0e9a4e419bbc0d4\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Dec 13 01:49:09.473912 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount403106215.mount: Deactivated successfully.
Dec 13 01:49:09.475043 containerd[1449]: time="2024-12-13T01:49:09.474997071Z" level=info msg="CreateContainer within sandbox \"e33cc7cf3607af5b291c340baf71a7b270b655b4af4725b0a0e9a4e419bbc0d4\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"28863d8d92187106a70b1cd8900b14fabe4d03f2f619606957d9aaf5b192d0a3\""
Dec 13 01:49:09.475716 containerd[1449]: time="2024-12-13T01:49:09.475687914Z" level=info msg="StartContainer for \"28863d8d92187106a70b1cd8900b14fabe4d03f2f619606957d9aaf5b192d0a3\""
Dec 13 01:49:09.503879 systemd[1]: Started cri-containerd-28863d8d92187106a70b1cd8900b14fabe4d03f2f619606957d9aaf5b192d0a3.scope - libcontainer container 28863d8d92187106a70b1cd8900b14fabe4d03f2f619606957d9aaf5b192d0a3.
Dec 13 01:49:09.536032 containerd[1449]: time="2024-12-13T01:49:09.535980769Z" level=info msg="StartContainer for \"28863d8d92187106a70b1cd8900b14fabe4d03f2f619606957d9aaf5b192d0a3\" returns successfully"
Dec 13 01:49:09.869088 kubelet[2458]: I1213 01:49:09.868661 2458 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Dec 13 01:49:09.875735 kubelet[2458]: I1213 01:49:09.875702 2458 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Dec 13 01:49:10.088342 kubelet[2458]: I1213 01:49:10.088287 2458 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-tv6tv" podStartSLOduration=25.949649157 podStartE2EDuration="57.088270723s" podCreationTimestamp="2024-12-13 01:48:13 +0000 UTC" firstStartedPulling="2024-12-13 01:48:38.32170333 +0000 UTC m=+38.638349345" lastFinishedPulling="2024-12-13 01:49:09.460324856 +0000 UTC m=+69.776970911" observedRunningTime="2024-12-13 01:49:10.087888143 +0000 UTC m=+70.404534198" watchObservedRunningTime="2024-12-13 01:49:10.088270723 +0000 UTC m=+70.404916778"
Dec 13 01:49:10.441495 systemd[1]: Started sshd@15-10.0.0.141:22-10.0.0.1:58064.service - OpenSSH per-connection server daemon (10.0.0.1:58064).
Dec 13 01:49:10.487806 sshd[5377]: Accepted publickey for core from 10.0.0.1 port 58064 ssh2: RSA SHA256:yVKhZEHbC7ylZ7bY3Y8pwdh1t/xp6Vz/y3yLFfd9j+Q
Dec 13 01:49:10.491028 sshd[5377]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:49:10.494735 systemd-logind[1427]: New session 16 of user core.
Dec 13 01:49:10.505776 systemd[1]: Started session-16.scope - Session 16 of User core.
Dec 13 01:49:10.715963 sshd[5377]: pam_unix(sshd:session): session closed for user core
Dec 13 01:49:10.725384 systemd[1]: sshd@15-10.0.0.141:22-10.0.0.1:58064.service: Deactivated successfully.
Dec 13 01:49:10.728172 systemd[1]: session-16.scope: Deactivated successfully.
Dec 13 01:49:10.729006 systemd-logind[1427]: Session 16 logged out. Waiting for processes to exit.
Dec 13 01:49:10.736895 systemd[1]: Started sshd@16-10.0.0.141:22-10.0.0.1:58066.service - OpenSSH per-connection server daemon (10.0.0.1:58066).
Dec 13 01:49:10.738380 systemd-logind[1427]: Removed session 16.
Dec 13 01:49:10.768326 sshd[5391]: Accepted publickey for core from 10.0.0.1 port 58066 ssh2: RSA SHA256:yVKhZEHbC7ylZ7bY3Y8pwdh1t/xp6Vz/y3yLFfd9j+Q
Dec 13 01:49:10.769780 sshd[5391]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:49:10.774028 systemd-logind[1427]: New session 17 of user core.
Dec 13 01:49:10.780794 systemd[1]: Started session-17.scope - Session 17 of User core.
Dec 13 01:49:10.984292 sshd[5391]: pam_unix(sshd:session): session closed for user core
Dec 13 01:49:10.994952 systemd[1]: sshd@16-10.0.0.141:22-10.0.0.1:58066.service: Deactivated successfully.
Dec 13 01:49:10.997834 systemd[1]: session-17.scope: Deactivated successfully.
Dec 13 01:49:10.999062 systemd-logind[1427]: Session 17 logged out. Waiting for processes to exit.
Dec 13 01:49:11.000238 systemd[1]: Started sshd@17-10.0.0.141:22-10.0.0.1:58070.service - OpenSSH per-connection server daemon (10.0.0.1:58070).
Dec 13 01:49:11.002109 systemd-logind[1427]: Removed session 17.
Dec 13 01:49:11.038222 sshd[5403]: Accepted publickey for core from 10.0.0.1 port 58070 ssh2: RSA SHA256:yVKhZEHbC7ylZ7bY3Y8pwdh1t/xp6Vz/y3yLFfd9j+Q
Dec 13 01:49:11.039380 sshd[5403]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:49:11.043420 systemd-logind[1427]: New session 18 of user core.
Dec 13 01:49:11.054718 systemd[1]: Started session-18.scope - Session 18 of User core.
Dec 13 01:49:12.557645 sshd[5403]: pam_unix(sshd:session): session closed for user core
Dec 13 01:49:12.566689 systemd[1]: sshd@17-10.0.0.141:22-10.0.0.1:58070.service: Deactivated successfully.
Dec 13 01:49:12.568267 systemd[1]: session-18.scope: Deactivated successfully.
Dec 13 01:49:12.569126 systemd-logind[1427]: Session 18 logged out. Waiting for processes to exit.
Dec 13 01:49:12.577897 systemd[1]: Started sshd@18-10.0.0.141:22-10.0.0.1:41830.service - OpenSSH per-connection server daemon (10.0.0.1:41830).
Dec 13 01:49:12.581000 systemd-logind[1427]: Removed session 18.
Dec 13 01:49:12.624680 sshd[5425]: Accepted publickey for core from 10.0.0.1 port 41830 ssh2: RSA SHA256:yVKhZEHbC7ylZ7bY3Y8pwdh1t/xp6Vz/y3yLFfd9j+Q
Dec 13 01:49:12.626067 sshd[5425]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:49:12.632015 systemd-logind[1427]: New session 19 of user core.
Dec 13 01:49:12.643803 systemd[1]: Started session-19.scope - Session 19 of User core.
Dec 13 01:49:13.008688 sshd[5425]: pam_unix(sshd:session): session closed for user core
Dec 13 01:49:13.016781 systemd[1]: sshd@18-10.0.0.141:22-10.0.0.1:41830.service: Deactivated successfully.
Dec 13 01:49:13.020779 systemd[1]: session-19.scope: Deactivated successfully.
Dec 13 01:49:13.021983 systemd-logind[1427]: Session 19 logged out. Waiting for processes to exit.
Dec 13 01:49:13.028851 systemd[1]: Started sshd@19-10.0.0.141:22-10.0.0.1:41838.service - OpenSSH per-connection server daemon (10.0.0.1:41838).
Dec 13 01:49:13.029889 systemd-logind[1427]: Removed session 19.
Dec 13 01:49:13.061482 sshd[5438]: Accepted publickey for core from 10.0.0.1 port 41838 ssh2: RSA SHA256:yVKhZEHbC7ylZ7bY3Y8pwdh1t/xp6Vz/y3yLFfd9j+Q
Dec 13 01:49:13.061999 sshd[5438]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:49:13.065689 systemd-logind[1427]: New session 20 of user core.
Dec 13 01:49:13.073731 systemd[1]: Started session-20.scope - Session 20 of User core.
Dec 13 01:49:13.208813 sshd[5438]: pam_unix(sshd:session): session closed for user core
Dec 13 01:49:13.211330 systemd[1]: sshd@19-10.0.0.141:22-10.0.0.1:41838.service: Deactivated successfully.
Dec 13 01:49:13.212993 systemd[1]: session-20.scope: Deactivated successfully.
Dec 13 01:49:13.214134 systemd-logind[1427]: Session 20 logged out. Waiting for processes to exit.
Dec 13 01:49:13.215288 systemd-logind[1427]: Removed session 20.
Dec 13 01:49:18.219263 systemd[1]: Started sshd@20-10.0.0.141:22-10.0.0.1:41854.service - OpenSSH per-connection server daemon (10.0.0.1:41854).
Dec 13 01:49:18.254320 sshd[5455]: Accepted publickey for core from 10.0.0.1 port 41854 ssh2: RSA SHA256:yVKhZEHbC7ylZ7bY3Y8pwdh1t/xp6Vz/y3yLFfd9j+Q
Dec 13 01:49:18.255650 sshd[5455]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:49:18.259579 systemd-logind[1427]: New session 21 of user core.
Dec 13 01:49:18.266735 systemd[1]: Started session-21.scope - Session 21 of User core.
Dec 13 01:49:18.400931 sshd[5455]: pam_unix(sshd:session): session closed for user core
Dec 13 01:49:18.404389 systemd[1]: sshd@20-10.0.0.141:22-10.0.0.1:41854.service: Deactivated successfully.
Dec 13 01:49:18.406629 systemd[1]: session-21.scope: Deactivated successfully.
Dec 13 01:49:18.407363 systemd-logind[1427]: Session 21 logged out. Waiting for processes to exit.
Dec 13 01:49:18.408340 systemd-logind[1427]: Removed session 21.
Dec 13 01:49:21.764434 kubelet[2458]: E1213 01:49:21.764377 2458 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:49:23.412520 systemd[1]: Started sshd@21-10.0.0.141:22-10.0.0.1:33840.service - OpenSSH per-connection server daemon (10.0.0.1:33840).
Dec 13 01:49:23.448977 sshd[5492]: Accepted publickey for core from 10.0.0.1 port 33840 ssh2: RSA SHA256:yVKhZEHbC7ylZ7bY3Y8pwdh1t/xp6Vz/y3yLFfd9j+Q
Dec 13 01:49:23.450389 sshd[5492]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:49:23.454147 systemd-logind[1427]: New session 22 of user core.
Dec 13 01:49:23.463827 systemd[1]: Started session-22.scope - Session 22 of User core.
Dec 13 01:49:23.572509 sshd[5492]: pam_unix(sshd:session): session closed for user core
Dec 13 01:49:23.576117 systemd[1]: sshd@21-10.0.0.141:22-10.0.0.1:33840.service: Deactivated successfully.
Dec 13 01:49:23.577945 systemd[1]: session-22.scope: Deactivated successfully.
Dec 13 01:49:23.578562 systemd-logind[1427]: Session 22 logged out. Waiting for processes to exit.
Dec 13 01:49:23.579497 systemd-logind[1427]: Removed session 22.
Dec 13 01:49:28.583397 systemd[1]: Started sshd@22-10.0.0.141:22-10.0.0.1:33852.service - OpenSSH per-connection server daemon (10.0.0.1:33852).
Dec 13 01:49:28.627452 sshd[5509]: Accepted publickey for core from 10.0.0.1 port 33852 ssh2: RSA SHA256:yVKhZEHbC7ylZ7bY3Y8pwdh1t/xp6Vz/y3yLFfd9j+Q
Dec 13 01:49:28.628975 sshd[5509]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:49:28.636048 systemd-logind[1427]: New session 23 of user core.
Dec 13 01:49:28.648149 systemd[1]: Started session-23.scope - Session 23 of User core.
Dec 13 01:49:28.830228 sshd[5509]: pam_unix(sshd:session): session closed for user core
Dec 13 01:49:28.834540 systemd[1]: sshd@22-10.0.0.141:22-10.0.0.1:33852.service: Deactivated successfully.
Dec 13 01:49:28.836337 systemd[1]: session-23.scope: Deactivated successfully.
Dec 13 01:49:28.837351 systemd-logind[1427]: Session 23 logged out. Waiting for processes to exit.
Dec 13 01:49:28.838157 systemd-logind[1427]: Removed session 23.