Oct 8 19:57:53.911946 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Oct 8 19:57:53.911972 kernel: Linux version 6.6.54-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Tue Oct 8 18:25:39 -00 2024
Oct 8 19:57:53.911983 kernel: KASLR enabled
Oct 8 19:57:53.911989 kernel: efi: EFI v2.7 by EDK II
Oct 8 19:57:53.911994 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdba86018 ACPI 2.0=0xd9710018 RNG=0xd971e498 MEMRESERVE=0xd9b43d18
Oct 8 19:57:53.912000 kernel: random: crng init done
Oct 8 19:57:53.912007 kernel: ACPI: Early table checksum verification disabled
Oct 8 19:57:53.912013 kernel: ACPI: RSDP 0x00000000D9710018 000024 (v02 BOCHS )
Oct 8 19:57:53.912020 kernel: ACPI: XSDT 0x00000000D971FE98 000064 (v01 BOCHS BXPC 00000001 01000013)
Oct 8 19:57:53.912028 kernel: ACPI: FACP 0x00000000D971FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Oct 8 19:57:53.912034 kernel: ACPI: DSDT 0x00000000D9717518 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Oct 8 19:57:53.912041 kernel: ACPI: APIC 0x00000000D971FC18 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Oct 8 19:57:53.912047 kernel: ACPI: PPTT 0x00000000D971D898 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Oct 8 19:57:53.912052 kernel: ACPI: GTDT 0x00000000D971E818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Oct 8 19:57:53.912060 kernel: ACPI: MCFG 0x00000000D971E918 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 8 19:57:53.912068 kernel: ACPI: SPCR 0x00000000D971FF98 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Oct 8 19:57:53.912080 kernel: ACPI: DBG2 0x00000000D971E418 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Oct 8 19:57:53.912086 kernel: ACPI: IORT 0x00000000D971E718 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Oct 8 19:57:53.912092 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Oct 8 19:57:53.912101 kernel: NUMA: Failed to initialise from firmware
Oct 8 19:57:53.912108 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Oct 8 19:57:53.912114 kernel: NUMA: NODE_DATA [mem 0xdc957800-0xdc95cfff]
Oct 8 19:57:53.912120 kernel: Zone ranges:
Oct 8 19:57:53.912127 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Oct 8 19:57:53.912133 kernel: DMA32 empty
Oct 8 19:57:53.912140 kernel: Normal empty
Oct 8 19:57:53.912147 kernel: Movable zone start for each node
Oct 8 19:57:53.912153 kernel: Early memory node ranges
Oct 8 19:57:53.912159 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff]
Oct 8 19:57:53.912166 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Oct 8 19:57:53.912172 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Oct 8 19:57:53.912178 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Oct 8 19:57:53.912185 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Oct 8 19:57:53.912191 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Oct 8 19:57:53.912198 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Oct 8 19:57:53.912204 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Oct 8 19:57:53.912211 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Oct 8 19:57:53.912219 kernel: psci: probing for conduit method from ACPI.
Oct 8 19:57:53.912225 kernel: psci: PSCIv1.1 detected in firmware.
Oct 8 19:57:53.912232 kernel: psci: Using standard PSCI v0.2 function IDs
Oct 8 19:57:53.912246 kernel: psci: Trusted OS migration not required
Oct 8 19:57:53.912252 kernel: psci: SMC Calling Convention v1.1
Oct 8 19:57:53.912266 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Oct 8 19:57:53.912275 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976
Oct 8 19:57:53.912282 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096
Oct 8 19:57:53.912290 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Oct 8 19:57:53.912296 kernel: Detected PIPT I-cache on CPU0
Oct 8 19:57:53.912303 kernel: CPU features: detected: GIC system register CPU interface
Oct 8 19:57:53.912310 kernel: CPU features: detected: Hardware dirty bit management
Oct 8 19:57:53.912317 kernel: CPU features: detected: Spectre-v4
Oct 8 19:57:53.912324 kernel: CPU features: detected: Spectre-BHB
Oct 8 19:57:53.912331 kernel: CPU features: kernel page table isolation forced ON by KASLR
Oct 8 19:57:53.912338 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Oct 8 19:57:53.912346 kernel: CPU features: detected: ARM erratum 1418040
Oct 8 19:57:53.912353 kernel: CPU features: detected: SSBS not fully self-synchronizing
Oct 8 19:57:53.912360 kernel: alternatives: applying boot alternatives
Oct 8 19:57:53.912367 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=f7968382bc5b46f9b6104a9f012cfba991c8ea306771e716a099618547de81d3
Oct 8 19:57:53.912374 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Oct 8 19:57:53.912381 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Oct 8 19:57:53.912388 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Oct 8 19:57:53.912396 kernel: Fallback order for Node 0: 0
Oct 8 19:57:53.912403 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Oct 8 19:57:53.912410 kernel: Policy zone: DMA
Oct 8 19:57:53.912417 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Oct 8 19:57:53.912425 kernel: software IO TLB: area num 4.
Oct 8 19:57:53.912432 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Oct 8 19:57:53.912439 kernel: Memory: 2386464K/2572288K available (10304K kernel code, 2184K rwdata, 8092K rodata, 39360K init, 897K bss, 185824K reserved, 0K cma-reserved)
Oct 8 19:57:53.912446 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Oct 8 19:57:53.912453 kernel: trace event string verifier disabled
Oct 8 19:57:53.912460 kernel: rcu: Preemptible hierarchical RCU implementation.
Oct 8 19:57:53.912467 kernel: rcu: RCU event tracing is enabled.
Oct 8 19:57:53.912474 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Oct 8 19:57:53.912481 kernel: Trampoline variant of Tasks RCU enabled.
Oct 8 19:57:53.912488 kernel: Tracing variant of Tasks RCU enabled.
Oct 8 19:57:53.912495 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Oct 8 19:57:53.912502 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Oct 8 19:57:53.912510 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Oct 8 19:57:53.912516 kernel: GICv3: 256 SPIs implemented
Oct 8 19:57:53.912523 kernel: GICv3: 0 Extended SPIs implemented
Oct 8 19:57:53.912530 kernel: Root IRQ handler: gic_handle_irq
Oct 8 19:57:53.912536 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Oct 8 19:57:53.912543 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Oct 8 19:57:53.912550 kernel: ITS [mem 0x08080000-0x0809ffff]
Oct 8 19:57:53.912557 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400d0000 (indirect, esz 8, psz 64K, shr 1)
Oct 8 19:57:53.912564 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400e0000 (flat, esz 8, psz 64K, shr 1)
Oct 8 19:57:53.912571 kernel: GICv3: using LPI property table @0x00000000400f0000
Oct 8 19:57:53.912578 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Oct 8 19:57:53.912586 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Oct 8 19:57:53.912593 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Oct 8 19:57:53.912600 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Oct 8 19:57:53.912607 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Oct 8 19:57:53.912614 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Oct 8 19:57:53.912621 kernel: arm-pv: using stolen time PV
Oct 8 19:57:53.912629 kernel: Console: colour dummy device 80x25
Oct 8 19:57:53.912636 kernel: ACPI: Core revision 20230628
Oct 8 19:57:53.912643 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Oct 8 19:57:53.912650 kernel: pid_max: default: 32768 minimum: 301
Oct 8 19:57:53.912658 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Oct 8 19:57:53.912665 kernel: landlock: Up and running.
Oct 8 19:57:53.912672 kernel: SELinux: Initializing.
Oct 8 19:57:53.912679 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Oct 8 19:57:53.912686 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Oct 8 19:57:53.912694 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1.
Oct 8 19:57:53.912701 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1.
Oct 8 19:57:53.912707 kernel: rcu: Hierarchical SRCU implementation.
Oct 8 19:57:53.912714 kernel: rcu: Max phase no-delay instances is 400.
Oct 8 19:57:53.912723 kernel: Platform MSI: ITS@0x8080000 domain created
Oct 8 19:57:53.912730 kernel: PCI/MSI: ITS@0x8080000 domain created
Oct 8 19:57:53.912737 kernel: Remapping and enabling EFI services.
Oct 8 19:57:53.912743 kernel: smp: Bringing up secondary CPUs ...
Oct 8 19:57:53.912750 kernel: Detected PIPT I-cache on CPU1
Oct 8 19:57:53.912757 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Oct 8 19:57:53.912764 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Oct 8 19:57:53.912771 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Oct 8 19:57:53.912779 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Oct 8 19:57:53.912799 kernel: Detected PIPT I-cache on CPU2
Oct 8 19:57:53.912809 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Oct 8 19:57:53.912817 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Oct 8 19:57:53.912829 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Oct 8 19:57:53.912837 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Oct 8 19:57:53.912844 kernel: Detected PIPT I-cache on CPU3
Oct 8 19:57:53.912852 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Oct 8 19:57:53.912859 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Oct 8 19:57:53.912879 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Oct 8 19:57:53.912887 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Oct 8 19:57:53.912899 kernel: smp: Brought up 1 node, 4 CPUs
Oct 8 19:57:53.912907 kernel: SMP: Total of 4 processors activated.
Oct 8 19:57:53.912914 kernel: CPU features: detected: 32-bit EL0 Support
Oct 8 19:57:53.912921 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Oct 8 19:57:53.912939 kernel: CPU features: detected: Common not Private translations
Oct 8 19:57:53.912946 kernel: CPU features: detected: CRC32 instructions
Oct 8 19:57:53.912955 kernel: CPU features: detected: Enhanced Virtualization Traps
Oct 8 19:57:53.912966 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Oct 8 19:57:53.912975 kernel: CPU features: detected: LSE atomic instructions
Oct 8 19:57:53.912982 kernel: CPU features: detected: Privileged Access Never
Oct 8 19:57:53.912990 kernel: CPU features: detected: RAS Extension Support
Oct 8 19:57:53.912997 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Oct 8 19:57:53.913004 kernel: CPU: All CPU(s) started at EL1
Oct 8 19:57:53.913012 kernel: alternatives: applying system-wide alternatives
Oct 8 19:57:53.913019 kernel: devtmpfs: initialized
Oct 8 19:57:53.913026 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Oct 8 19:57:53.913034 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Oct 8 19:57:53.913043 kernel: pinctrl core: initialized pinctrl subsystem
Oct 8 19:57:53.913050 kernel: SMBIOS 3.0.0 present.
Oct 8 19:57:53.913059 kernel: DMI: QEMU KVM Virtual Machine, BIOS edk2-20230524-3.fc38 05/24/2023
Oct 8 19:57:53.913067 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Oct 8 19:57:53.913074 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Oct 8 19:57:53.913081 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Oct 8 19:57:53.913089 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Oct 8 19:57:53.913096 kernel: audit: initializing netlink subsys (disabled)
Oct 8 19:57:53.913104 kernel: audit: type=2000 audit(0.024:1): state=initialized audit_enabled=0 res=1
Oct 8 19:57:53.913113 kernel: thermal_sys: Registered thermal governor 'step_wise'
Oct 8 19:57:53.913121 kernel: cpuidle: using governor menu
Oct 8 19:57:53.913128 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Oct 8 19:57:53.913136 kernel: ASID allocator initialised with 32768 entries
Oct 8 19:57:53.913143 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Oct 8 19:57:53.913151 kernel: Serial: AMBA PL011 UART driver
Oct 8 19:57:53.913158 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Oct 8 19:57:53.913166 kernel: Modules: 0 pages in range for non-PLT usage
Oct 8 19:57:53.913173 kernel: Modules: 509024 pages in range for PLT usage
Oct 8 19:57:53.913182 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Oct 8 19:57:53.913190 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Oct 8 19:57:53.913198 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Oct 8 19:57:53.913205 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Oct 8 19:57:53.913212 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Oct 8 19:57:53.913219 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Oct 8 19:57:53.913227 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Oct 8 19:57:53.913235 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Oct 8 19:57:53.913262 kernel: ACPI: Added _OSI(Module Device)
Oct 8 19:57:53.913294 kernel: ACPI: Added _OSI(Processor Device)
Oct 8 19:57:53.913304 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Oct 8 19:57:53.913311 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Oct 8 19:57:53.913319 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Oct 8 19:57:53.913326 kernel: ACPI: Interpreter enabled
Oct 8 19:57:53.913333 kernel: ACPI: Using GIC for interrupt routing
Oct 8 19:57:53.913341 kernel: ACPI: MCFG table detected, 1 entries
Oct 8 19:57:53.913348 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Oct 8 19:57:53.913356 kernel: printk: console [ttyAMA0] enabled
Oct 8 19:57:53.913365 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Oct 8 19:57:53.913528 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Oct 8 19:57:53.913608 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Oct 8 19:57:53.913689 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Oct 8 19:57:53.913756 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Oct 8 19:57:53.913822 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Oct 8 19:57:53.913832 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Oct 8 19:57:53.913843 kernel: PCI host bridge to bus 0000:00
Oct 8 19:57:53.913938 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Oct 8 19:57:53.914009 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Oct 8 19:57:53.914073 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Oct 8 19:57:53.914132 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Oct 8 19:57:53.914214 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Oct 8 19:57:53.914304 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Oct 8 19:57:53.914381 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Oct 8 19:57:53.914450 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Oct 8 19:57:53.914518 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Oct 8 19:57:53.914584 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Oct 8 19:57:53.914651 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Oct 8 19:57:53.914718 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Oct 8 19:57:53.914779 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Oct 8 19:57:53.914840 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Oct 8 19:57:53.915058 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Oct 8 19:57:53.915073 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Oct 8 19:57:53.915081 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Oct 8 19:57:53.915089 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Oct 8 19:57:53.915096 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Oct 8 19:57:53.915104 kernel: iommu: Default domain type: Translated
Oct 8 19:57:53.915112 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Oct 8 19:57:53.915124 kernel: efivars: Registered efivars operations
Oct 8 19:57:53.915131 kernel: vgaarb: loaded
Oct 8 19:57:53.915139 kernel: clocksource: Switched to clocksource arch_sys_counter
Oct 8 19:57:53.915147 kernel: VFS: Disk quotas dquot_6.6.0
Oct 8 19:57:53.915154 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Oct 8 19:57:53.915162 kernel: pnp: PnP ACPI init
Oct 8 19:57:53.915242 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Oct 8 19:57:53.915253 kernel: pnp: PnP ACPI: found 1 devices
Oct 8 19:57:53.915272 kernel: NET: Registered PF_INET protocol family
Oct 8 19:57:53.915280 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Oct 8 19:57:53.915288 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Oct 8 19:57:53.915295 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Oct 8 19:57:53.915303 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Oct 8 19:57:53.915311 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Oct 8 19:57:53.915318 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Oct 8 19:57:53.915326 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Oct 8 19:57:53.915334 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Oct 8 19:57:53.915343 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Oct 8 19:57:53.915351 kernel: PCI: CLS 0 bytes, default 64
Oct 8 19:57:53.915358 kernel: kvm [1]: HYP mode not available
Oct 8 19:57:53.915365 kernel: Initialise system trusted keyrings
Oct 8 19:57:53.915373 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Oct 8 19:57:53.915381 kernel: Key type asymmetric registered
Oct 8 19:57:53.915389 kernel: Asymmetric key parser 'x509' registered
Oct 8 19:57:53.915396 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Oct 8 19:57:53.915404 kernel: io scheduler mq-deadline registered
Oct 8 19:57:53.915413 kernel: io scheduler kyber registered
Oct 8 19:57:53.915421 kernel: io scheduler bfq registered
Oct 8 19:57:53.915428 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Oct 8 19:57:53.915436 kernel: ACPI: button: Power Button [PWRB]
Oct 8 19:57:53.915444 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Oct 8 19:57:53.915519 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Oct 8 19:57:53.915529 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Oct 8 19:57:53.915537 kernel: thunder_xcv, ver 1.0
Oct 8 19:57:53.915545 kernel: thunder_bgx, ver 1.0
Oct 8 19:57:53.915554 kernel: nicpf, ver 1.0
Oct 8 19:57:53.915562 kernel: nicvf, ver 1.0
Oct 8 19:57:53.915649 kernel: rtc-efi rtc-efi.0: registered as rtc0
Oct 8 19:57:53.915712 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-10-08T19:57:53 UTC (1728417473)
Oct 8 19:57:53.915723 kernel: hid: raw HID events driver (C) Jiri Kosina
Oct 8 19:57:53.915731 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Oct 8 19:57:53.915739 kernel: watchdog: Delayed init of the lockup detector failed: -19
Oct 8 19:57:53.915746 kernel: watchdog: Hard watchdog permanently disabled
Oct 8 19:57:53.915756 kernel: NET: Registered PF_INET6 protocol family
Oct 8 19:57:53.915764 kernel: Segment Routing with IPv6
Oct 8 19:57:53.915771 kernel: In-situ OAM (IOAM) with IPv6
Oct 8 19:57:53.915778 kernel: NET: Registered PF_PACKET protocol family
Oct 8 19:57:53.915786 kernel: Key type dns_resolver registered
Oct 8 19:57:53.915793 kernel: registered taskstats version 1
Oct 8 19:57:53.915800 kernel: Loading compiled-in X.509 certificates
Oct 8 19:57:53.915808 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.54-flatcar: e9e638352c282bfddf5aec6da700ad8191939d05'
Oct 8 19:57:53.915816 kernel: Key type .fscrypt registered
Oct 8 19:57:53.915825 kernel: Key type fscrypt-provisioning registered
Oct 8 19:57:53.915833 kernel: ima: No TPM chip found, activating TPM-bypass!
Oct 8 19:57:53.915840 kernel: ima: Allocated hash algorithm: sha1
Oct 8 19:57:53.915848 kernel: ima: No architecture policies found
Oct 8 19:57:53.915855 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Oct 8 19:57:53.915874 kernel: clk: Disabling unused clocks
Oct 8 19:57:53.915883 kernel: Freeing unused kernel memory: 39360K
Oct 8 19:57:53.915891 kernel: Run /init as init process
Oct 8 19:57:53.915898 kernel: with arguments:
Oct 8 19:57:53.915912 kernel: /init
Oct 8 19:57:53.915928 kernel: with environment:
Oct 8 19:57:53.915937 kernel: HOME=/
Oct 8 19:57:53.915944 kernel: TERM=linux
Oct 8 19:57:53.915951 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Oct 8 19:57:53.915962 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Oct 8 19:57:53.915971 systemd[1]: Detected virtualization kvm.
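The entries above show the handoff from kernel to user space: the BOOT_IMAGE parameter the kernel flagged earlier as unknown ("will be passed to user space") reappears in /init's environment. A minimal sketch of that key=value split, for illustration only; the kernel's real parser lives in init/main.c and additionally handles quoting and module-prefixed options:

```python
# Illustrative only: split a kernel command line like the one logged above
# into key=value options and bare flags. Parameters the kernel does not
# recognize (e.g. BOOT_IMAGE=...) are handed to the init process, which is
# why BOOT_IMAGE shows up in /init's environment in this log.
import shlex

cmdline = (
    "BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr "
    "rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT "
    "console=ttyS0,115200 flatcar.first_boot=detected acpi=force"
)

options, flags = {}, []
for token in shlex.split(cmdline):
    if "=" in token:
        key, _, value = token.partition("=")
        options[key] = value
    else:
        flags.append(token)

print(options["root"])  # LABEL=ROOT
print(options["console"])  # ttyS0,115200
```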
Oct 8 19:57:53.915979 systemd[1]: Detected architecture arm64.
Oct 8 19:57:53.915989 systemd[1]: Running in initrd.
Oct 8 19:57:53.915996 systemd[1]: No hostname configured, using default hostname.
Oct 8 19:57:53.916004 systemd[1]: Hostname set to <localhost>.
Oct 8 19:57:53.916012 systemd[1]: Initializing machine ID from VM UUID.
Oct 8 19:57:53.916020 systemd[1]: Queued start job for default target initrd.target.
Oct 8 19:57:53.916028 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 8 19:57:53.916036 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 8 19:57:53.916044 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Oct 8 19:57:53.916054 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Oct 8 19:57:53.916062 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Oct 8 19:57:53.916070 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Oct 8 19:57:53.916080 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Oct 8 19:57:53.916088 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Oct 8 19:57:53.916097 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 8 19:57:53.916105 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Oct 8 19:57:53.916114 systemd[1]: Reached target paths.target - Path Units.
Oct 8 19:57:53.916122 systemd[1]: Reached target slices.target - Slice Units.
Oct 8 19:57:53.916130 systemd[1]: Reached target swap.target - Swaps.
Oct 8 19:57:53.916138 systemd[1]: Reached target timers.target - Timer Units.
Oct 8 19:57:53.916146 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Oct 8 19:57:53.916155 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Oct 8 19:57:53.916164 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Oct 8 19:57:53.916171 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Oct 8 19:57:53.916181 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Oct 8 19:57:53.916189 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Oct 8 19:57:53.916197 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 8 19:57:53.916207 systemd[1]: Reached target sockets.target - Socket Units.
Oct 8 19:57:53.916221 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Oct 8 19:57:53.916229 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Oct 8 19:57:53.916237 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Oct 8 19:57:53.916245 systemd[1]: Starting systemd-fsck-usr.service...
Oct 8 19:57:53.916252 systemd[1]: Starting systemd-journald.service - Journal Service...
Oct 8 19:57:53.916269 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Oct 8 19:57:53.916277 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 8 19:57:53.916285 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Oct 8 19:57:53.916293 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 8 19:57:53.916301 systemd[1]: Finished systemd-fsck-usr.service.
Oct 8 19:57:53.916309 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Oct 8 19:57:53.916320 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Oct 8 19:57:53.916328 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Oct 8 19:57:53.916361 systemd-journald[238]: Collecting audit messages is disabled.
Oct 8 19:57:53.916384 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Oct 8 19:57:53.916393 systemd-journald[238]: Journal started
Oct 8 19:57:53.916413 systemd-journald[238]: Runtime Journal (/run/log/journal/687343db72134514a3d8608b10c25dd4) is 5.9M, max 47.3M, 41.4M free.
Oct 8 19:57:53.925993 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Oct 8 19:57:53.926030 kernel: Bridge firewalling registered
Oct 8 19:57:53.908398 systemd-modules-load[239]: Inserted module 'overlay'
Oct 8 19:57:53.925451 systemd-modules-load[239]: Inserted module 'br_netfilter'
Oct 8 19:57:53.930924 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Oct 8 19:57:53.930952 systemd[1]: Started systemd-journald.service - Journal Service.
Oct 8 19:57:53.933012 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Oct 8 19:57:53.943081 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Oct 8 19:57:53.945470 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Oct 8 19:57:53.946560 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 8 19:57:53.949693 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 8 19:57:53.952545 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Oct 8 19:57:53.955610 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Oct 8 19:57:53.959764 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Oct 8 19:57:53.962625 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Oct 8 19:57:53.974498 dracut-cmdline[276]: dracut-dracut-053
Oct 8 19:57:53.977389 dracut-cmdline[276]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=f7968382bc5b46f9b6104a9f012cfba991c8ea306771e716a099618547de81d3
Oct 8 19:57:53.995362 systemd-resolved[279]: Positive Trust Anchors:
Oct 8 19:57:53.995382 systemd-resolved[279]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Oct 8 19:57:53.995413 systemd-resolved[279]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Oct 8 19:57:54.000413 systemd-resolved[279]: Defaulting to hostname 'linux'.
Oct 8 19:57:54.003502 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Oct 8 19:57:54.004487 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Oct 8 19:57:54.049916 kernel: SCSI subsystem initialized
Oct 8 19:57:54.054897 kernel: Loading iSCSI transport class v2.0-870.
Oct 8 19:57:54.062899 kernel: iscsi: registered transport (tcp)
Oct 8 19:57:54.076928 kernel: iscsi: registered transport (qla4xxx)
Oct 8 19:57:54.076989 kernel: QLogic iSCSI HBA Driver
Oct 8 19:57:54.122304 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Oct 8 19:57:54.140050 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Oct 8 19:57:54.157522 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Oct 8 19:57:54.157587 kernel: device-mapper: uevent: version 1.0.3
Oct 8 19:57:54.158659 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Oct 8 19:57:54.204904 kernel: raid6: neonx8 gen() 15770 MB/s
Oct 8 19:57:54.221893 kernel: raid6: neonx4 gen() 15632 MB/s
Oct 8 19:57:54.238890 kernel: raid6: neonx2 gen() 13255 MB/s
Oct 8 19:57:54.255884 kernel: raid6: neonx1 gen() 10502 MB/s
Oct 8 19:57:54.272884 kernel: raid6: int64x8 gen() 6962 MB/s
Oct 8 19:57:54.289888 kernel: raid6: int64x4 gen() 7335 MB/s
Oct 8 19:57:54.306885 kernel: raid6: int64x2 gen() 6127 MB/s
Oct 8 19:57:54.324003 kernel: raid6: int64x1 gen() 5055 MB/s
Oct 8 19:57:54.324022 kernel: raid6: using algorithm neonx8 gen() 15770 MB/s
Oct 8 19:57:54.342021 kernel: raid6: .... xor() 11925 MB/s, rmw enabled
Oct 8 19:57:54.342035 kernel: raid6: using neon recovery algorithm
Oct 8 19:57:54.346884 kernel: xor: measuring software checksum speed
Oct 8 19:57:54.348228 kernel: 8regs : 17286 MB/sec
Oct 8 19:57:54.348241 kernel: 32regs : 19650 MB/sec
Oct 8 19:57:54.348910 kernel: arm64_neon : 26778 MB/sec
Oct 8 19:57:54.348923 kernel: xor: using function: arm64_neon (26778 MB/sec)
Oct 8 19:57:54.400891 kernel: Btrfs loaded, zoned=no, fsverity=no
Oct 8 19:57:54.412546 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Oct 8 19:57:54.427088 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 8 19:57:54.439040 systemd-udevd[461]: Using default interface naming scheme 'v255'.
Oct 8 19:57:54.442203 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 8 19:57:54.452052 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Oct 8 19:57:54.464015 dracut-pre-trigger[469]: rd.md=0: removing MD RAID activation
Oct 8 19:57:54.494481 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Oct 8 19:57:54.508087 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Oct 8 19:57:54.550248 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 8 19:57:54.560400 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Oct 8 19:57:54.573994 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Oct 8 19:57:54.575368 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Oct 8 19:57:54.576803 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 8 19:57:54.579743 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Oct 8 19:57:54.589301 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Oct 8 19:57:54.599820 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Oct 8 19:57:54.608842 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Oct 8 19:57:54.609440 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Oct 8 19:57:54.611328 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Oct 8 19:57:54.611378 kernel: GPT:9289727 != 19775487
Oct 8 19:57:54.613226 kernel: GPT:Alternate GPT header not at the end of the disk.
Oct 8 19:57:54.613274 kernel: GPT:9289727 != 19775487
Oct 8 19:57:54.613293 kernel: GPT: Use GNU Parted to correct GPT errors.
Oct 8 19:57:54.613947 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Oct 8 19:57:54.614432 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Oct 8 19:57:54.614565 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 8 19:57:54.617137 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Oct 8 19:57:54.617966 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 8 19:57:54.618145 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Oct 8 19:57:54.619754 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Oct 8 19:57:54.625141 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 8 19:57:54.639469 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (522)
Oct 8 19:57:54.639526 kernel: BTRFS: device fsid ad786f33-c7c5-429e-95f9-4ea457bd3916 devid 1 transid 40 /dev/vda3 scanned by (udev-worker) (520)
Oct 8 19:57:54.639186 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Oct 8 19:57:54.640685 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Oct 8 19:57:54.647586 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Oct 8 19:57:54.659386 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Oct 8 19:57:54.663884 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Oct 8 19:57:54.664960 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Oct 8 19:57:54.679039 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Oct 8 19:57:54.680898 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Oct 8 19:57:54.686892 disk-uuid[550]: Primary Header is updated.
Oct 8 19:57:54.686892 disk-uuid[550]: Secondary Entries is updated.
Oct 8 19:57:54.686892 disk-uuid[550]: Secondary Header is updated.
Oct 8 19:57:54.689888 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Oct 8 19:57:54.707023 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 8 19:57:55.703661 disk-uuid[551]: The operation has completed successfully.
Oct 8 19:57:55.704744 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Oct 8 19:57:55.731998 systemd[1]: disk-uuid.service: Deactivated successfully.
Oct 8 19:57:55.732102 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Oct 8 19:57:55.748056 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Oct 8 19:57:55.751057 sh[575]: Success
Oct 8 19:57:55.764904 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Oct 8 19:57:55.793028 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Oct 8 19:57:55.806310 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Oct 8 19:57:55.808311 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Oct 8 19:57:55.817895 kernel: BTRFS info (device dm-0): first mount of filesystem ad786f33-c7c5-429e-95f9-4ea457bd3916
Oct 8 19:57:55.817931 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Oct 8 19:57:55.817948 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Oct 8 19:57:55.819418 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Oct 8 19:57:55.819447 kernel: BTRFS info (device dm-0): using free space tree
Oct 8 19:57:55.823125 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Oct 8 19:57:55.824923 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Oct 8 19:57:55.838038 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Oct 8 19:57:55.839393 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Oct 8 19:57:55.846551 kernel: BTRFS info (device vda6): first mount of filesystem cbd8a2bc-d0a3-4040-91fa-086f2a330687
Oct 8 19:57:55.846589 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Oct 8 19:57:55.846601 kernel: BTRFS info (device vda6): using free space tree
Oct 8 19:57:55.849907 kernel: BTRFS info (device vda6): auto enabling async discard
Oct 8 19:57:55.856570 systemd[1]: mnt-oem.mount: Deactivated successfully.
Oct 8 19:57:55.858281 kernel: BTRFS info (device vda6): last unmount of filesystem cbd8a2bc-d0a3-4040-91fa-086f2a330687
Oct 8 19:57:55.864318 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Oct 8 19:57:55.872045 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Oct 8 19:57:55.930938 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Oct 8 19:57:55.941039 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Oct 8 19:57:55.966123 systemd-networkd[768]: lo: Link UP
Oct 8 19:57:55.966134 systemd-networkd[768]: lo: Gained carrier
Oct 8 19:57:55.966793 systemd-networkd[768]: Enumeration completed
Oct 8 19:57:55.966882 systemd[1]: Started systemd-networkd.service - Network Configuration.
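The GPT warnings above ("Alternate GPT header not at the end of the disk") are expected when a small disk image is written to a larger disk: the backup header sits where the image ended, not at the last LBA. Flatcar's disk-uuid.service rewrote the headers automatically, as the disk-uuid entries show. On a generic system the same relocation can be done with GNU Parted, as the kernel suggests, or with sgdisk; a hedged sketch, assuming gptfdisk is installed and /dev/vda is the affected disk (this helper is not part of Flatcar):

```python
# Hypothetical repair helper, shown for illustration only. sgdisk's
# --move-second-header (short form -e) relocates the backup GPT data
# structures to the end of the disk, which is what the disk-uuid service
# effectively did here during boot.
import subprocess

def move_backup_gpt_header(device: str = "/dev/vda") -> None:
    # Requires root; verify the device path before running on a real system.
    subprocess.run(["sgdisk", "--move-second-header", device], check=True)

if __name__ == "__main__":
    move_backup_gpt_header()
```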
Oct 8 19:57:55.967275 systemd-networkd[768]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 8 19:57:55.967279 systemd-networkd[768]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Oct 8 19:57:55.969413 systemd[1]: Reached target network.target - Network.
Oct 8 19:57:55.970954 systemd-networkd[768]: eth0: Link UP
Oct 8 19:57:55.973341 ignition[668]: Ignition 2.19.0
Oct 8 19:57:55.970958 systemd-networkd[768]: eth0: Gained carrier
Oct 8 19:57:55.973347 ignition[668]: Stage: fetch-offline
Oct 8 19:57:55.970966 systemd-networkd[768]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 8 19:57:55.973379 ignition[668]: no configs at "/usr/lib/ignition/base.d"
Oct 8 19:57:55.973387 ignition[668]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 8 19:57:55.973539 ignition[668]: parsed url from cmdline: ""
Oct 8 19:57:55.973543 ignition[668]: no config URL provided
Oct 8 19:57:55.973547 ignition[668]: reading system config file "/usr/lib/ignition/user.ign"
Oct 8 19:57:55.973554 ignition[668]: no config at "/usr/lib/ignition/user.ign"
Oct 8 19:57:55.973574 ignition[668]: op(1): [started] loading QEMU firmware config module
Oct 8 19:57:55.973579 ignition[668]: op(1): executing: "modprobe" "qemu_fw_cfg"
Oct 8 19:57:55.980577 ignition[668]: op(1): [finished] loading QEMU firmware config module
Oct 8 19:57:55.991914 systemd-networkd[768]: eth0: DHCPv4 address 10.0.0.130/16, gateway 10.0.0.1 acquired from 10.0.0.1
Oct 8 19:57:56.021012 ignition[668]: parsing config with SHA512: 4f595cfc5fea1a0bb9acfc8024b081d3f5a1ac3cbe9760a142333345e05ccfd2fbde3560e4ad9bb81ea289e411bc3380f6c9ada43c4175ef23833a901096e131
Oct 8 19:57:56.025628 unknown[668]: fetched base config from "system"
Oct 8 19:57:56.025646 unknown[668]: fetched user config from "qemu"
Oct 8 19:57:56.027532 ignition[668]: fetch-offline: fetch-offline passed
Oct 8 19:57:56.027641 ignition[668]: Ignition finished successfully
Oct 8 19:57:56.029803 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Oct 8 19:57:56.031677 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Oct 8 19:57:56.044241 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Oct 8 19:57:56.054926 ignition[774]: Ignition 2.19.0
Oct 8 19:57:56.054938 ignition[774]: Stage: kargs
Oct 8 19:57:56.055243 ignition[774]: no configs at "/usr/lib/ignition/base.d"
Oct 8 19:57:56.055264 ignition[774]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 8 19:57:56.056182 ignition[774]: kargs: kargs passed
Oct 8 19:57:56.056231 ignition[774]: Ignition finished successfully
Oct 8 19:57:56.060432 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Oct 8 19:57:56.074062 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Oct 8 19:57:56.084049 ignition[782]: Ignition 2.19.0
Oct 8 19:57:56.084060 ignition[782]: Stage: disks
Oct 8 19:57:56.084232 ignition[782]: no configs at "/usr/lib/ignition/base.d"
Oct 8 19:57:56.084242 ignition[782]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 8 19:57:56.085136 ignition[782]: disks: disks passed
Oct 8 19:57:56.086615 systemd[1]: Finished ignition-disks.service - Ignition (disks).
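Before the fetch-offline stage passes, Ignition logs the SHA512 of the config it is about to parse (the "parsing config with SHA512: ..." entry above). For illustration, the same digest can be recomputed with hashlib against a local config file; user.ign here is an assumed path, matching the system config location named in the entries above:

```python
# A minimal sketch, assuming a local Ignition config at user.ign:
# recompute the digest Ignition reports, e.g. to confirm which config
# a boot actually consumed.
import hashlib
from pathlib import Path

digest = hashlib.sha512(Path("user.ign").read_bytes()).hexdigest()
print(f"parsing config with SHA512: {digest}")
```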
Oct 8 19:57:56.085184 ignition[782]: Ignition finished successfully
Oct 8 19:57:56.087945 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Oct 8 19:57:56.089029 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Oct 8 19:57:56.090284 systemd[1]: Reached target local-fs.target - Local File Systems.
Oct 8 19:57:56.091607 systemd[1]: Reached target sysinit.target - System Initialization.
Oct 8 19:57:56.093047 systemd[1]: Reached target basic.target - Basic System.
Oct 8 19:57:56.103059 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Oct 8 19:57:56.111800 systemd-resolved[279]: Detected conflict on linux IN A 10.0.0.130
Oct 8 19:57:56.111811 systemd-resolved[279]: Hostname conflict, changing published hostname from 'linux' to 'linux4'.
Oct 8 19:57:56.114185 systemd-fsck[793]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Oct 8 19:57:56.118169 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Oct 8 19:57:56.131088 systemd[1]: Mounting sysroot.mount - /sysroot...
Oct 8 19:57:56.177895 kernel: EXT4-fs (vda9): mounted filesystem 833c86f3-93dd-4526-bb43-c7809dac8e51 r/w with ordered data mode. Quota mode: none.
Oct 8 19:57:56.178533 systemd[1]: Mounted sysroot.mount - /sysroot.
Oct 8 19:57:56.179651 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Oct 8 19:57:56.195972 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Oct 8 19:57:56.198014 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Oct 8 19:57:56.198855 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Oct 8 19:57:56.198922 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Oct 8 19:57:56.198947 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Oct 8 19:57:56.205982 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (802)
Oct 8 19:57:56.208582 kernel: BTRFS info (device vda6): first mount of filesystem cbd8a2bc-d0a3-4040-91fa-086f2a330687
Oct 8 19:57:56.208641 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Oct 8 19:57:56.208653 kernel: BTRFS info (device vda6): using free space tree
Oct 8 19:57:56.208534 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Oct 8 19:57:56.212239 kernel: BTRFS info (device vda6): auto enabling async discard
Oct 8 19:57:56.216039 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Oct 8 19:57:56.217981 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Oct 8 19:57:56.255822 initrd-setup-root[827]: cut: /sysroot/etc/passwd: No such file or directory
Oct 8 19:57:56.260531 initrd-setup-root[834]: cut: /sysroot/etc/group: No such file or directory
Oct 8 19:57:56.264317 initrd-setup-root[841]: cut: /sysroot/etc/shadow: No such file or directory
Oct 8 19:57:56.268336 initrd-setup-root[848]: cut: /sysroot/etc/gshadow: No such file or directory
Oct 8 19:57:56.346753 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Oct 8 19:57:56.360046 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Oct 8 19:57:56.361686 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Oct 8 19:57:56.366886 kernel: BTRFS info (device vda6): last unmount of filesystem cbd8a2bc-d0a3-4040-91fa-086f2a330687
Oct 8 19:57:56.381550 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Oct 8 19:57:56.386457 ignition[916]: INFO : Ignition 2.19.0
Oct 8 19:57:56.386457 ignition[916]: INFO : Stage: mount
Oct 8 19:57:56.387735 ignition[916]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 8 19:57:56.387735 ignition[916]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 8 19:57:56.387735 ignition[916]: INFO : mount: mount passed
Oct 8 19:57:56.387735 ignition[916]: INFO : Ignition finished successfully
Oct 8 19:57:56.389222 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Oct 8 19:57:56.403023 systemd[1]: Starting ignition-files.service - Ignition (files)...
Oct 8 19:57:56.816704 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Oct 8 19:57:56.829320 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Oct 8 19:57:56.845473 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (930)
Oct 8 19:57:56.845530 kernel: BTRFS info (device vda6): first mount of filesystem cbd8a2bc-d0a3-4040-91fa-086f2a330687
Oct 8 19:57:56.845542 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Oct 8 19:57:56.847229 kernel: BTRFS info (device vda6): using free space tree
Oct 8 19:57:56.850889 kernel: BTRFS info (device vda6): auto enabling async discard
Oct 8 19:57:56.853396 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Oct 8 19:57:56.893520 ignition[947]: INFO : Ignition 2.19.0
Oct 8 19:57:56.893520 ignition[947]: INFO : Stage: files
Oct 8 19:57:56.894829 ignition[947]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 8 19:57:56.894829 ignition[947]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 8 19:57:56.894829 ignition[947]: DEBUG : files: compiled without relabeling support, skipping
Oct 8 19:57:56.897846 ignition[947]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Oct 8 19:57:56.897846 ignition[947]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Oct 8 19:57:56.897846 ignition[947]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Oct 8 19:57:56.900850 ignition[947]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Oct 8 19:57:56.900850 ignition[947]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Oct 8 19:57:56.900850 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Oct 8 19:57:56.900850 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Oct 8 19:57:56.899365 unknown[947]: wrote ssh authorized keys file for user: core
Oct 8 19:57:56.978036 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Oct 8 19:57:57.266972 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Oct 8 19:57:57.266972 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Oct 8 19:57:57.270220 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Oct 8 19:57:57.270220 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Oct 8 19:57:57.270220 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Oct 8 19:57:57.270220 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Oct 8 19:57:57.270220 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Oct 8 19:57:57.270220 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Oct 8 19:57:57.270220 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Oct 8 19:57:57.270220 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Oct 8 19:57:57.270220 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Oct 8 19:57:57.270220 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Oct 8 19:57:57.270220 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Oct 8 19:57:57.270220 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Oct 8 19:57:57.270220 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-arm64.raw: attempt #1
Oct 8 19:57:57.561488 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Oct 8 19:57:57.671494 systemd-networkd[768]: eth0: Gained IPv6LL
Oct 8 19:57:57.836550 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Oct 8 19:57:57.836550 ignition[947]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Oct 8 19:57:57.839514 ignition[947]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Oct 8 19:57:57.841035 ignition[947]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Oct 8 19:57:57.841035 ignition[947]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Oct 8 19:57:57.841035 ignition[947]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Oct 8 19:57:57.841035 ignition[947]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Oct 8 19:57:57.841035 ignition[947]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Oct 8 19:57:57.841035 ignition[947]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Oct 8 19:57:57.841035 ignition[947]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Oct 8 19:57:57.866345 ignition[947]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Oct 8 19:57:57.870360 ignition[947]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Oct 8 19:57:57.871486 ignition[947]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Oct 8 19:57:57.871486 ignition[947]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Oct 8 19:57:57.871486 ignition[947]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Oct 8 19:57:57.871486 ignition[947]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Oct 8 19:57:57.871486 ignition[947]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Oct 8 19:57:57.871486 ignition[947]: INFO : files: files passed
Oct 8 19:57:57.871486 ignition[947]: INFO : Ignition finished successfully
Oct 8 19:57:57.872602 systemd[1]: Finished ignition-files.service - Ignition (files).
Oct 8 19:57:57.881083 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Oct 8 19:57:57.883347 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Oct 8 19:57:57.885536 systemd[1]: ignition-quench.service: Deactivated successfully.
Oct 8 19:57:57.885665 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Oct 8 19:57:57.891560 initrd-setup-root-after-ignition[975]: grep: /sysroot/oem/oem-release: No such file or directory
Oct 8 19:57:57.895101 initrd-setup-root-after-ignition[977]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Oct 8 19:57:57.895101 initrd-setup-root-after-ignition[977]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Oct 8 19:57:57.897355 initrd-setup-root-after-ignition[981]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Oct 8 19:57:57.899318 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Oct 8 19:57:57.900659 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Oct 8 19:57:57.914069 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Oct 8 19:57:57.934371 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Oct 8 19:57:57.935093 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Oct 8 19:57:57.936197 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Oct 8 19:57:57.937587 systemd[1]: Reached target initrd.target - Initrd Default Target.
Oct 8 19:57:57.938977 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Oct 8 19:57:57.939786 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Oct 8 19:57:57.955544 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Oct 8 19:57:57.964038 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Oct 8 19:57:57.971986 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Oct 8 19:57:57.973001 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 8 19:57:57.974630 systemd[1]: Stopped target timers.target - Timer Units. Oct 8 19:57:57.976145 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Oct 8 19:57:57.976283 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Oct 8 19:57:57.978220 systemd[1]: Stopped target initrd.target - Initrd Default Target. Oct 8 19:57:57.979737 systemd[1]: Stopped target basic.target - Basic System. Oct 8 19:57:57.981042 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Oct 8 19:57:57.982454 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Oct 8 19:57:57.983878 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Oct 8 19:57:57.985595 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Oct 8 19:57:57.986932 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Oct 8 19:57:57.988427 systemd[1]: Stopped target sysinit.target - System Initialization. Oct 8 19:57:57.989834 systemd[1]: Stopped target local-fs.target - Local File Systems. Oct 8 19:57:57.991147 systemd[1]: Stopped target swap.target - Swaps. Oct 8 19:57:57.992331 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Oct 8 19:57:57.992459 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Oct 8 19:57:57.994437 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Oct 8 19:57:57.995927 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 8 19:57:57.997436 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Oct 8 19:57:57.998856 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 8 19:57:58.000688 systemd[1]: dracut-initqueue.service: Deactivated successfully. Oct 8 19:57:58.000818 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Oct 8 19:57:58.002787 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Oct 8 19:57:58.002924 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Oct 8 19:57:58.004582 systemd[1]: Stopped target paths.target - Path Units. Oct 8 19:57:58.005809 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Oct 8 19:57:58.009907 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 8 19:57:58.010885 systemd[1]: Stopped target slices.target - Slice Units. Oct 8 19:57:58.012679 systemd[1]: Stopped target sockets.target - Socket Units. Oct 8 19:57:58.013925 systemd[1]: iscsid.socket: Deactivated successfully. Oct 8 19:57:58.014019 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Oct 8 19:57:58.015324 systemd[1]: iscsiuio.socket: Deactivated successfully. Oct 8 19:57:58.015415 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Oct 8 19:57:58.016587 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Oct 8 19:57:58.016700 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Oct 8 19:57:58.017969 systemd[1]: ignition-files.service: Deactivated successfully. Oct 8 19:57:58.018071 systemd[1]: Stopped ignition-files.service - Ignition (files). Oct 8 19:57:58.031059 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Oct 8 19:57:58.031803 systemd[1]: kmod-static-nodes.service: Deactivated successfully. 
Oct 8 19:57:58.031955 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Oct 8 19:57:58.034141 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Oct 8 19:57:58.035432 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Oct 8 19:57:58.035558 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Oct 8 19:57:58.036908 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Oct 8 19:57:58.037009 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Oct 8 19:57:58.041911 systemd[1]: initrd-cleanup.service: Deactivated successfully. Oct 8 19:57:58.042000 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Oct 8 19:57:58.045170 ignition[1001]: INFO : Ignition 2.19.0 Oct 8 19:57:58.045170 ignition[1001]: INFO : Stage: umount Oct 8 19:57:58.045170 ignition[1001]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 8 19:57:58.045170 ignition[1001]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 8 19:57:58.049450 ignition[1001]: INFO : umount: umount passed Oct 8 19:57:58.049450 ignition[1001]: INFO : Ignition finished successfully Oct 8 19:57:58.048046 systemd[1]: ignition-mount.service: Deactivated successfully. Oct 8 19:57:58.049906 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Oct 8 19:57:58.052012 systemd[1]: sysroot-boot.mount: Deactivated successfully. Oct 8 19:57:58.052357 systemd[1]: Stopped target network.target - Network. Oct 8 19:57:58.053592 systemd[1]: ignition-disks.service: Deactivated successfully. Oct 8 19:57:58.053645 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Oct 8 19:57:58.055016 systemd[1]: ignition-kargs.service: Deactivated successfully. Oct 8 19:57:58.055060 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Oct 8 19:57:58.056405 systemd[1]: ignition-setup.service: Deactivated successfully. Oct 8 19:57:58.056444 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Oct 8 19:57:58.057747 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Oct 8 19:57:58.057786 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Oct 8 19:57:58.059620 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Oct 8 19:57:58.060977 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Oct 8 19:57:58.067717 systemd-networkd[768]: eth0: DHCPv6 lease lost Oct 8 19:57:58.070058 systemd[1]: systemd-resolved.service: Deactivated successfully. Oct 8 19:57:58.070177 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Oct 8 19:57:58.072803 systemd[1]: systemd-networkd.service: Deactivated successfully. Oct 8 19:57:58.072992 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Oct 8 19:57:58.075287 systemd[1]: systemd-networkd.socket: Deactivated successfully. Oct 8 19:57:58.075381 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Oct 8 19:57:58.089006 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Oct 8 19:57:58.089684 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Oct 8 19:57:58.089749 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Oct 8 19:57:58.091434 systemd[1]: systemd-sysctl.service: Deactivated successfully. Oct 8 19:57:58.091478 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. 
Oct 8 19:57:58.092819 systemd[1]: systemd-modules-load.service: Deactivated successfully. Oct 8 19:57:58.092858 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Oct 8 19:57:58.094704 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Oct 8 19:57:58.094750 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 8 19:57:58.096349 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 8 19:57:58.111210 systemd[1]: sysroot-boot.service: Deactivated successfully. Oct 8 19:57:58.111344 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Oct 8 19:57:58.113123 systemd[1]: systemd-udevd.service: Deactivated successfully. Oct 8 19:57:58.113269 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 8 19:57:58.114705 systemd[1]: network-cleanup.service: Deactivated successfully. Oct 8 19:57:58.115897 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Oct 8 19:57:58.117828 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Oct 8 19:57:58.117896 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Oct 8 19:57:58.119008 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Oct 8 19:57:58.119039 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Oct 8 19:57:58.120448 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Oct 8 19:57:58.120491 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Oct 8 19:57:58.122730 systemd[1]: dracut-cmdline.service: Deactivated successfully. Oct 8 19:57:58.122771 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Oct 8 19:57:58.124999 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Oct 8 19:57:58.125040 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 8 19:57:58.127383 systemd[1]: initrd-setup-root.service: Deactivated successfully. Oct 8 19:57:58.127427 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Oct 8 19:57:58.140010 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Oct 8 19:57:58.140832 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Oct 8 19:57:58.140906 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 8 19:57:58.142760 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 8 19:57:58.142801 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Oct 8 19:57:58.147609 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Oct 8 19:57:58.147705 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Oct 8 19:57:58.149520 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Oct 8 19:57:58.152459 systemd[1]: Starting initrd-switch-root.service - Switch Root... Oct 8 19:57:58.161125 systemd[1]: Switching root. Oct 8 19:57:58.189438 systemd-journald[238]: Journal stopped Oct 8 19:57:58.864285 systemd-journald[238]: Received SIGTERM from PID 1 (systemd). 
Oct 8 19:57:58.864341 kernel: SELinux: policy capability network_peer_controls=1 Oct 8 19:57:58.864353 kernel: SELinux: policy capability open_perms=1 Oct 8 19:57:58.864363 kernel: SELinux: policy capability extended_socket_class=1 Oct 8 19:57:58.864372 kernel: SELinux: policy capability always_check_network=0 Oct 8 19:57:58.864382 kernel: SELinux: policy capability cgroup_seclabel=1 Oct 8 19:57:58.864392 kernel: SELinux: policy capability nnp_nosuid_transition=1 Oct 8 19:57:58.864404 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Oct 8 19:57:58.864414 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Oct 8 19:57:58.864424 kernel: audit: type=1403 audit(1728417478.324:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Oct 8 19:57:58.864434 systemd[1]: Successfully loaded SELinux policy in 31.430ms. Oct 8 19:57:58.864454 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.806ms. Oct 8 19:57:58.864465 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Oct 8 19:57:58.864476 systemd[1]: Detected virtualization kvm. Oct 8 19:57:58.864487 systemd[1]: Detected architecture arm64. Oct 8 19:57:58.864497 systemd[1]: Detected first boot. Oct 8 19:57:58.864509 systemd[1]: Initializing machine ID from VM UUID. Oct 8 19:57:58.864520 zram_generator::config[1049]: No configuration found. Oct 8 19:57:58.864531 systemd[1]: Populated /etc with preset unit settings. Oct 8 19:57:58.864541 systemd[1]: initrd-switch-root.service: Deactivated successfully. Oct 8 19:57:58.864555 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Oct 8 19:57:58.864568 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Oct 8 19:57:58.864580 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Oct 8 19:57:58.864591 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Oct 8 19:57:58.864603 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Oct 8 19:57:58.864613 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Oct 8 19:57:58.864624 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Oct 8 19:57:58.864635 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Oct 8 19:57:58.864646 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Oct 8 19:57:58.864656 systemd[1]: Created slice user.slice - User and Session Slice. Oct 8 19:57:58.864666 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 8 19:57:58.864677 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 8 19:57:58.864688 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Oct 8 19:57:58.864699 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Oct 8 19:57:58.864710 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Oct 8 19:57:58.864720 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... 
Oct 8 19:57:58.864731 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Oct 8 19:57:58.864741 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 8 19:57:58.864751 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Oct 8 19:57:58.864762 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Oct 8 19:57:58.864773 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Oct 8 19:57:58.864785 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Oct 8 19:57:58.864810 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 8 19:57:58.864822 systemd[1]: Reached target remote-fs.target - Remote File Systems. Oct 8 19:57:58.864833 systemd[1]: Reached target slices.target - Slice Units. Oct 8 19:57:58.864844 systemd[1]: Reached target swap.target - Swaps. Oct 8 19:57:58.864854 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Oct 8 19:57:58.865333 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Oct 8 19:57:58.865356 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Oct 8 19:57:58.865367 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Oct 8 19:57:58.865382 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Oct 8 19:57:58.865393 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Oct 8 19:57:58.865425 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Oct 8 19:57:58.865439 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Oct 8 19:57:58.865450 systemd[1]: Mounting media.mount - External Media Directory... Oct 8 19:57:58.865461 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Oct 8 19:57:58.865471 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Oct 8 19:57:58.865482 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Oct 8 19:57:58.865493 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Oct 8 19:57:58.865506 systemd[1]: Reached target machines.target - Containers. Oct 8 19:57:58.865516 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Oct 8 19:57:58.865528 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 8 19:57:58.865540 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Oct 8 19:57:58.865552 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Oct 8 19:57:58.865562 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 8 19:57:58.865572 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Oct 8 19:57:58.865583 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 8 19:57:58.865594 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Oct 8 19:57:58.865605 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 8 19:57:58.865615 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). 
Oct 8 19:57:58.865626 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Oct 8 19:57:58.865636 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Oct 8 19:57:58.865647 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Oct 8 19:57:58.865662 systemd[1]: Stopped systemd-fsck-usr.service. Oct 8 19:57:58.865672 kernel: fuse: init (API version 7.39) Oct 8 19:57:58.865682 kernel: loop: module loaded Oct 8 19:57:58.865694 systemd[1]: Starting systemd-journald.service - Journal Service... Oct 8 19:57:58.865704 kernel: ACPI: bus type drm_connector registered Oct 8 19:57:58.865715 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Oct 8 19:57:58.865725 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Oct 8 19:57:58.865736 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Oct 8 19:57:58.865766 systemd-journald[1116]: Collecting audit messages is disabled. Oct 8 19:57:58.865788 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Oct 8 19:57:58.865800 systemd-journald[1116]: Journal started Oct 8 19:57:58.865823 systemd-journald[1116]: Runtime Journal (/run/log/journal/687343db72134514a3d8608b10c25dd4) is 5.9M, max 47.3M, 41.4M free. Oct 8 19:57:58.677049 systemd[1]: Queued start job for default target multi-user.target. Oct 8 19:57:58.690427 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Oct 8 19:57:58.690768 systemd[1]: systemd-journald.service: Deactivated successfully. Oct 8 19:57:58.867448 systemd[1]: verity-setup.service: Deactivated successfully. Oct 8 19:57:58.867473 systemd[1]: Stopped verity-setup.service. Oct 8 19:57:58.871318 systemd[1]: Started systemd-journald.service - Journal Service. Oct 8 19:57:58.871897 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Oct 8 19:57:58.872833 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Oct 8 19:57:58.873835 systemd[1]: Mounted media.mount - External Media Directory. Oct 8 19:57:58.874820 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Oct 8 19:57:58.875722 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Oct 8 19:57:58.876627 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Oct 8 19:57:58.878931 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Oct 8 19:57:58.879987 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Oct 8 19:57:58.881090 systemd[1]: modprobe@configfs.service: Deactivated successfully. Oct 8 19:57:58.881222 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Oct 8 19:57:58.882280 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 8 19:57:58.882411 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 8 19:57:58.883492 systemd[1]: modprobe@drm.service: Deactivated successfully. Oct 8 19:57:58.883617 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Oct 8 19:57:58.884612 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 8 19:57:58.884746 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 8 19:57:58.885892 systemd[1]: modprobe@fuse.service: Deactivated successfully. Oct 8 19:57:58.886029 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. 
Oct 8 19:57:58.887150 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 8 19:57:58.887309 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 8 19:57:58.888383 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Oct 8 19:57:58.889415 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Oct 8 19:57:58.890515 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Oct 8 19:57:58.901958 systemd[1]: Reached target network-pre.target - Preparation for Network. Oct 8 19:57:58.911975 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Oct 8 19:57:58.913701 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Oct 8 19:57:58.914551 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Oct 8 19:57:58.914585 systemd[1]: Reached target local-fs.target - Local File Systems. Oct 8 19:57:58.916214 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Oct 8 19:57:58.918125 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Oct 8 19:57:58.919826 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Oct 8 19:57:58.920720 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 8 19:57:58.924029 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Oct 8 19:57:58.925677 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Oct 8 19:57:58.926564 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 8 19:57:58.930035 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Oct 8 19:57:58.931189 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Oct 8 19:57:58.933035 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Oct 8 19:57:58.933690 systemd-journald[1116]: Time spent on flushing to /var/log/journal/687343db72134514a3d8608b10c25dd4 is 22.407ms for 855 entries. Oct 8 19:57:58.933690 systemd-journald[1116]: System Journal (/var/log/journal/687343db72134514a3d8608b10c25dd4) is 8.0M, max 195.6M, 187.6M free. Oct 8 19:57:58.963544 systemd-journald[1116]: Received client request to flush runtime journal. Oct 8 19:57:58.963590 kernel: loop0: detected capacity change from 0 to 114328 Oct 8 19:57:58.938952 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Oct 8 19:57:58.940656 systemd[1]: Starting systemd-sysusers.service - Create System Users... Oct 8 19:57:58.942775 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Oct 8 19:57:58.945148 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Oct 8 19:57:58.947150 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Oct 8 19:57:58.948370 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Oct 8 19:57:58.950891 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. 
Oct 8 19:57:58.954763 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Oct 8 19:57:58.964020 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Oct 8 19:57:58.966540 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Oct 8 19:57:58.968183 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Oct 8 19:57:58.977960 systemd[1]: Finished systemd-sysusers.service - Create System Users. Oct 8 19:57:58.982807 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Oct 8 19:57:58.984268 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Oct 8 19:57:58.989885 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Oct 8 19:57:58.990840 udevadm[1171]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Oct 8 19:57:59.011683 systemd-tmpfiles[1175]: ACLs are not supported, ignoring. Oct 8 19:57:59.011700 systemd-tmpfiles[1175]: ACLs are not supported, ignoring. Oct 8 19:57:59.015019 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Oct 8 19:57:59.016195 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 8 19:57:59.017562 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Oct 8 19:57:59.040889 kernel: loop1: detected capacity change from 0 to 189592 Oct 8 19:57:59.090917 kernel: loop2: detected capacity change from 0 to 114432 Oct 8 19:57:59.124899 kernel: loop3: detected capacity change from 0 to 114328 Oct 8 19:57:59.128980 kernel: loop4: detected capacity change from 0 to 189592 Oct 8 19:57:59.135905 kernel: loop5: detected capacity change from 0 to 114432 Oct 8 19:57:59.138943 (sd-merge)[1184]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Oct 8 19:57:59.139325 (sd-merge)[1184]: Merged extensions into '/usr'. Oct 8 19:57:59.142608 systemd[1]: Reloading requested from client PID 1160 ('systemd-sysext') (unit systemd-sysext.service)... Oct 8 19:57:59.142624 systemd[1]: Reloading... Oct 8 19:57:59.191905 zram_generator::config[1208]: No configuration found. Oct 8 19:57:59.202031 ldconfig[1155]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Oct 8 19:57:59.294163 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 8 19:57:59.330702 systemd[1]: Reloading finished in 187 ms. Oct 8 19:57:59.358953 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Oct 8 19:57:59.360278 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Oct 8 19:57:59.374128 systemd[1]: Starting ensure-sysext.service... Oct 8 19:57:59.375843 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Oct 8 19:57:59.386116 systemd[1]: Reloading requested from client PID 1246 ('systemctl') (unit ensure-sysext.service)... Oct 8 19:57:59.386129 systemd[1]: Reloading... Oct 8 19:57:59.401852 systemd-tmpfiles[1249]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. 
Oct 8 19:57:59.402228 systemd-tmpfiles[1249]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Oct 8 19:57:59.402894 systemd-tmpfiles[1249]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Oct 8 19:57:59.403116 systemd-tmpfiles[1249]: ACLs are not supported, ignoring. Oct 8 19:57:59.403173 systemd-tmpfiles[1249]: ACLs are not supported, ignoring. Oct 8 19:57:59.405363 systemd-tmpfiles[1249]: Detected autofs mount point /boot during canonicalization of boot. Oct 8 19:57:59.405376 systemd-tmpfiles[1249]: Skipping /boot Oct 8 19:57:59.412228 systemd-tmpfiles[1249]: Detected autofs mount point /boot during canonicalization of boot. Oct 8 19:57:59.412253 systemd-tmpfiles[1249]: Skipping /boot Oct 8 19:57:59.438906 zram_generator::config[1277]: No configuration found. Oct 8 19:57:59.516540 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 8 19:57:59.552411 systemd[1]: Reloading finished in 165 ms. Oct 8 19:57:59.572716 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Oct 8 19:57:59.585378 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 8 19:57:59.591293 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Oct 8 19:57:59.593538 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Oct 8 19:57:59.595502 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Oct 8 19:57:59.600129 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Oct 8 19:57:59.605367 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 8 19:57:59.610396 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Oct 8 19:57:59.623730 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Oct 8 19:57:59.625286 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Oct 8 19:57:59.634010 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 8 19:57:59.635790 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 8 19:57:59.640820 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 8 19:57:59.643787 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 8 19:57:59.644238 systemd-udevd[1320]: Using default interface naming scheme 'v255'. Oct 8 19:57:59.645452 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 8 19:57:59.647954 systemd[1]: Starting systemd-update-done.service - Update is Completed... Oct 8 19:57:59.650349 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 8 19:57:59.650745 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 8 19:57:59.661171 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Oct 8 19:57:59.664286 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 8 19:57:59.665035 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. 
Oct 8 19:57:59.666686 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 8 19:57:59.669933 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Oct 8 19:57:59.672271 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 8 19:57:59.672434 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 8 19:57:59.674036 systemd[1]: Finished systemd-update-done.service - Update is Completed. Oct 8 19:57:59.676210 systemd[1]: Started systemd-userdbd.service - User Database Manager. Oct 8 19:57:59.686973 augenrules[1360]: No rules Oct 8 19:57:59.689263 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Oct 8 19:57:59.692728 systemd[1]: Finished ensure-sysext.service. Oct 8 19:57:59.697916 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 8 19:57:59.705144 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 8 19:57:59.709196 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Oct 8 19:57:59.714093 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 8 19:57:59.718924 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1340) Oct 8 19:57:59.719072 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 8 19:57:59.723254 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 8 19:57:59.723886 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1340) Oct 8 19:57:59.727065 systemd[1]: Starting systemd-networkd.service - Network Configuration... Oct 8 19:57:59.738110 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Oct 8 19:57:59.739000 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Oct 8 19:57:59.739590 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 8 19:57:59.739741 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 8 19:57:59.742918 systemd[1]: modprobe@drm.service: Deactivated successfully. Oct 8 19:57:59.743059 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Oct 8 19:57:59.744300 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 8 19:57:59.744440 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 8 19:57:59.745674 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 8 19:57:59.745968 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 8 19:57:59.749879 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1358) Oct 8 19:57:59.751685 systemd-resolved[1316]: Positive Trust Anchors: Oct 8 19:57:59.751704 systemd-resolved[1316]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Oct 8 19:57:59.751738 systemd-resolved[1316]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Oct 8 19:57:59.753638 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Oct 8 19:57:59.754717 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 8 19:57:59.754860 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Oct 8 19:57:59.770001 systemd-resolved[1316]: Defaulting to hostname 'linux'. Oct 8 19:57:59.774290 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Oct 8 19:57:59.778561 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Oct 8 19:57:59.811124 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Oct 8 19:57:59.812648 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Oct 8 19:57:59.814072 systemd[1]: Reached target time-set.target - System Time Set. Oct 8 19:57:59.817218 systemd-networkd[1381]: lo: Link UP Oct 8 19:57:59.817227 systemd-networkd[1381]: lo: Gained carrier Oct 8 19:57:59.818509 systemd-networkd[1381]: Enumeration completed Oct 8 19:57:59.822098 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Oct 8 19:57:59.823120 systemd-networkd[1381]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 8 19:57:59.823123 systemd-networkd[1381]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 8 19:57:59.823506 systemd[1]: Started systemd-networkd.service - Network Configuration. Oct 8 19:57:59.824140 systemd-networkd[1381]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 8 19:57:59.824180 systemd-networkd[1381]: eth0: Link UP Oct 8 19:57:59.824183 systemd-networkd[1381]: eth0: Gained carrier Oct 8 19:57:59.824193 systemd-networkd[1381]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 8 19:57:59.825013 systemd[1]: Reached target network.target - Network. Oct 8 19:57:59.830009 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Oct 8 19:57:59.840130 systemd-networkd[1381]: eth0: DHCPv4 address 10.0.0.130/16, gateway 10.0.0.1 acquired from 10.0.0.1 Oct 8 19:57:59.843131 systemd-timesyncd[1382]: Network configuration changed, trying to establish connection. Oct 8 19:57:59.843843 systemd-timesyncd[1382]: Contacted time server 10.0.0.1:123 (10.0.0.1). Oct 8 19:57:59.843965 systemd-timesyncd[1382]: Initial clock synchronization to Tue 2024-10-08 19:57:59.932017 UTC.
Oct 8 19:57:59.846426 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Oct 8 19:57:59.855076 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 8 19:57:59.865088 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Oct 8 19:57:59.867414 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Oct 8 19:57:59.891353 lvm[1403]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Oct 8 19:57:59.894919 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 8 19:57:59.929942 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Oct 8 19:57:59.931141 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Oct 8 19:57:59.931937 systemd[1]: Reached target sysinit.target - System Initialization. Oct 8 19:57:59.932754 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Oct 8 19:57:59.933712 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Oct 8 19:57:59.934790 systemd[1]: Started logrotate.timer - Daily rotation of log files. Oct 8 19:57:59.935685 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Oct 8 19:57:59.936596 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Oct 8 19:57:59.937485 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Oct 8 19:57:59.937517 systemd[1]: Reached target paths.target - Path Units. Oct 8 19:57:59.938206 systemd[1]: Reached target timers.target - Timer Units. Oct 8 19:57:59.940942 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Oct 8 19:57:59.943149 systemd[1]: Starting docker.socket - Docker Socket for the API... Oct 8 19:57:59.954784 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Oct 8 19:57:59.956816 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Oct 8 19:57:59.958124 systemd[1]: Listening on docker.socket - Docker Socket for the API. Oct 8 19:57:59.959096 systemd[1]: Reached target sockets.target - Socket Units. Oct 8 19:57:59.959827 systemd[1]: Reached target basic.target - Basic System. Oct 8 19:57:59.960636 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Oct 8 19:57:59.960667 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Oct 8 19:57:59.961597 systemd[1]: Starting containerd.service - containerd container runtime... Oct 8 19:57:59.963401 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Oct 8 19:57:59.966014 lvm[1411]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Oct 8 19:57:59.966513 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Oct 8 19:57:59.970162 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Oct 8 19:57:59.971197 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Oct 8 19:57:59.980505 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... 
Oct 8 19:57:59.983005 jq[1414]: false Oct 8 19:57:59.982715 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Oct 8 19:57:59.986028 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Oct 8 19:57:59.988039 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Oct 8 19:57:59.990935 systemd[1]: Starting systemd-logind.service - User Login Management... Oct 8 19:57:59.994276 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Oct 8 19:57:59.994656 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Oct 8 19:57:59.995350 systemd[1]: Starting update-engine.service - Update Engine... Oct 8 19:58:00.000003 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Oct 8 19:58:00.001947 extend-filesystems[1415]: Found loop3 Oct 8 19:58:00.001947 extend-filesystems[1415]: Found loop4 Oct 8 19:58:00.001947 extend-filesystems[1415]: Found loop5 Oct 8 19:58:00.001947 extend-filesystems[1415]: Found vda Oct 8 19:58:00.001947 extend-filesystems[1415]: Found vda1 Oct 8 19:58:00.001947 extend-filesystems[1415]: Found vda2 Oct 8 19:58:00.001947 extend-filesystems[1415]: Found vda3 Oct 8 19:58:00.001947 extend-filesystems[1415]: Found usr Oct 8 19:58:00.001947 extend-filesystems[1415]: Found vda4 Oct 8 19:58:00.001947 extend-filesystems[1415]: Found vda6 Oct 8 19:58:00.001947 extend-filesystems[1415]: Found vda7 Oct 8 19:58:00.001947 extend-filesystems[1415]: Found vda9 Oct 8 19:58:00.001947 extend-filesystems[1415]: Checking size of /dev/vda9 Oct 8 19:58:00.005075 dbus-daemon[1413]: [system] SELinux support is enabled Oct 8 19:58:00.002680 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Oct 8 19:58:00.014175 systemd[1]: Started dbus.service - D-Bus System Message Bus. Oct 8 19:58:00.022431 jq[1429]: true Oct 8 19:58:00.020426 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Oct 8 19:58:00.023134 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Oct 8 19:58:00.023520 systemd[1]: motdgen.service: Deactivated successfully. Oct 8 19:58:00.023661 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Oct 8 19:58:00.027159 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Oct 8 19:58:00.027503 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Oct 8 19:58:00.028752 extend-filesystems[1415]: Resized partition /dev/vda9 Oct 8 19:58:00.052944 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1344) Oct 8 19:58:00.057071 tar[1437]: linux-arm64/helm Oct 8 19:58:00.058091 update_engine[1427]: I20241008 19:58:00.052724 1427 main.cc:92] Flatcar Update Engine starting Oct 8 19:58:00.058304 extend-filesystems[1440]: resize2fs 1.47.1 (20-May-2024) Oct 8 19:58:00.059922 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Oct 8 19:58:00.064996 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Oct 8 19:58:00.059950 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. 
Oct 8 19:58:00.068358 update_engine[1427]: I20241008 19:58:00.065349 1427 update_check_scheduler.cc:74] Next update check in 9m38s Oct 8 19:58:00.066088 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Oct 8 19:58:00.066111 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Oct 8 19:58:00.069463 systemd[1]: Started update-engine.service - Update Engine. Oct 8 19:58:00.072948 (ntainerd)[1439]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Oct 8 19:58:00.075973 systemd-logind[1426]: Watching system buttons on /dev/input/event0 (Power Button) Oct 8 19:58:00.077189 jq[1438]: true Oct 8 19:58:00.076272 systemd-logind[1426]: New seat seat0. Oct 8 19:58:00.077572 systemd[1]: Started locksmithd.service - Cluster reboot manager. Oct 8 19:58:00.078954 systemd[1]: Started systemd-logind.service - User Login Management. Oct 8 19:58:00.093163 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Oct 8 19:58:00.118958 extend-filesystems[1440]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Oct 8 19:58:00.118958 extend-filesystems[1440]: old_desc_blocks = 1, new_desc_blocks = 1 Oct 8 19:58:00.118958 extend-filesystems[1440]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Oct 8 19:58:00.132001 extend-filesystems[1415]: Resized filesystem in /dev/vda9 Oct 8 19:58:00.129839 systemd[1]: extend-filesystems.service: Deactivated successfully. Oct 8 19:58:00.130063 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Oct 8 19:58:00.152222 locksmithd[1451]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Oct 8 19:58:00.152971 bash[1467]: Updated "/home/core/.ssh/authorized_keys" Oct 8 19:58:00.154055 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Oct 8 19:58:00.158592 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Oct 8 19:58:00.282495 containerd[1439]: time="2024-10-08T19:58:00.282368318Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Oct 8 19:58:00.312206 containerd[1439]: time="2024-10-08T19:58:00.312137432Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Oct 8 19:58:00.313775 containerd[1439]: time="2024-10-08T19:58:00.313739743Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.54-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Oct 8 19:58:00.313911 containerd[1439]: time="2024-10-08T19:58:00.313893305Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Oct 8 19:58:00.313983 containerd[1439]: time="2024-10-08T19:58:00.313967633Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Oct 8 19:58:00.314199 containerd[1439]: time="2024-10-08T19:58:00.314178309Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Oct 8 19:58:00.314263 containerd[1439]: time="2024-10-08T19:58:00.314250867Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Oct 8 19:58:00.314376 containerd[1439]: time="2024-10-08T19:58:00.314357250Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Oct 8 19:58:00.314444 containerd[1439]: time="2024-10-08T19:58:00.314430573Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Oct 8 19:58:00.314675 containerd[1439]: time="2024-10-08T19:58:00.314655084Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Oct 8 19:58:00.315144 containerd[1439]: time="2024-10-08T19:58:00.314725390Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Oct 8 19:58:00.315144 containerd[1439]: time="2024-10-08T19:58:00.314757888Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Oct 8 19:58:00.315144 containerd[1439]: time="2024-10-08T19:58:00.314801327Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Oct 8 19:58:00.315144 containerd[1439]: time="2024-10-08T19:58:00.314900269Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Oct 8 19:58:00.315144 containerd[1439]: time="2024-10-08T19:58:00.315109899Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Oct 8 19:58:00.315390 containerd[1439]: time="2024-10-08T19:58:00.315370931Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Oct 8 19:58:00.315453 containerd[1439]: time="2024-10-08T19:58:00.315440553Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Oct 8 19:58:00.315587 containerd[1439]: time="2024-10-08T19:58:00.315568777Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Oct 8 19:58:00.315691 containerd[1439]: time="2024-10-08T19:58:00.315674436Z" level=info msg="metadata content store policy set" policy=shared Oct 8 19:58:00.319287 containerd[1439]: time="2024-10-08T19:58:00.319263687Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Oct 8 19:58:00.319393 containerd[1439]: time="2024-10-08T19:58:00.319378075Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Oct 8 19:58:00.319490 containerd[1439]: time="2024-10-08T19:58:00.319475368Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Oct 8 19:58:00.319550 containerd[1439]: time="2024-10-08T19:58:00.319537389Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Oct 8 19:58:00.319605 containerd[1439]: time="2024-10-08T19:58:00.319593175Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Oct 8 19:58:00.319793 containerd[1439]: time="2024-10-08T19:58:00.319772358Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Oct 8 19:58:00.321102 containerd[1439]: time="2024-10-08T19:58:00.320142187Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Oct 8 19:58:00.321357 containerd[1439]: time="2024-10-08T19:58:00.321327449Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Oct 8 19:58:00.321433 containerd[1439]: time="2024-10-08T19:58:00.321414567Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Oct 8 19:58:00.321507 containerd[1439]: time="2024-10-08T19:58:00.321492193Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Oct 8 19:58:00.321896 containerd[1439]: time="2024-10-08T19:58:00.321556827Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Oct 8 19:58:00.321896 containerd[1439]: time="2024-10-08T19:58:00.321580517Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Oct 8 19:58:00.321896 containerd[1439]: time="2024-10-08T19:58:00.321598255Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Oct 8 19:58:00.321896 containerd[1439]: time="2024-10-08T19:58:00.321637188Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Oct 8 19:58:00.321896 containerd[1439]: time="2024-10-08T19:58:00.321653518Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Oct 8 19:58:00.321896 containerd[1439]: time="2024-10-08T19:58:00.321671577Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Oct 8 19:58:00.321896 containerd[1439]: time="2024-10-08T19:58:00.321687303Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Oct 8 19:58:00.321896 containerd[1439]: time="2024-10-08T19:58:00.321702627Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Oct 8 19:58:00.321896 containerd[1439]: time="2024-10-08T19:58:00.321725915Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Oct 8 19:58:00.321896 containerd[1439]: time="2024-10-08T19:58:00.321744859Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Oct 8 19:58:00.321896 containerd[1439]: time="2024-10-08T19:58:00.321778524Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Oct 8 19:58:00.321896 containerd[1439]: time="2024-10-08T19:58:00.321798674Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Oct 8 19:58:00.321896 containerd[1439]: time="2024-10-08T19:58:00.321812068Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Oct 8 19:58:00.321896 containerd[1439]: time="2024-10-08T19:58:00.321828759Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Oct 8 19:58:00.322279 containerd[1439]: time="2024-10-08T19:58:00.321847381Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Oct 8 19:58:00.322332 containerd[1439]: time="2024-10-08T19:58:00.321864274Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Oct 8 19:58:00.322505 containerd[1439]: time="2024-10-08T19:58:00.322487493Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Oct 8 19:58:00.322568 containerd[1439]: time="2024-10-08T19:58:00.322555949Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Oct 8 19:58:00.322940 containerd[1439]: time="2024-10-08T19:58:00.322607391Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Oct 8 19:58:00.322940 containerd[1439]: time="2024-10-08T19:58:00.322625691Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Oct 8 19:58:00.322940 containerd[1439]: time="2024-10-08T19:58:00.322638522Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Oct 8 19:58:00.322940 containerd[1439]: time="2024-10-08T19:58:00.322655294Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Oct 8 19:58:00.322940 containerd[1439]: time="2024-10-08T19:58:00.322679144Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Oct 8 19:58:00.322940 containerd[1439]: time="2024-10-08T19:58:00.322692055Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Oct 8 19:58:00.322940 containerd[1439]: time="2024-10-08T19:58:00.322703679Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Oct 8 19:58:00.323580 containerd[1439]: time="2024-10-08T19:58:00.323548433Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Oct 8 19:58:00.323659 containerd[1439]: time="2024-10-08T19:58:00.323643555Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Oct 8 19:58:00.323793 containerd[1439]: time="2024-10-08T19:58:00.323778858Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Oct 8 19:58:00.323848 containerd[1439]: time="2024-10-08T19:58:00.323833719Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Oct 8 19:58:00.323916 containerd[1439]: time="2024-10-08T19:58:00.323900686Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Oct 8 19:58:00.323976 containerd[1439]: time="2024-10-08T19:58:00.323965119Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Oct 8 19:58:00.324035 containerd[1439]: time="2024-10-08T19:58:00.324022836Z" level=info msg="NRI interface is disabled by configuration."
Oct 8 19:58:00.325077 containerd[1439]: time="2024-10-08T19:58:00.324073474Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Oct 8 19:58:00.325125 containerd[1439]: time="2024-10-08T19:58:00.324345245Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Oct 8 19:58:00.325125 containerd[1439]: time="2024-10-08T19:58:00.324415671Z" level=info msg="Connect containerd service" Oct 8 19:58:00.325125 containerd[1439]: time="2024-10-08T19:58:00.324446158Z" level=info msg="using legacy CRI server" Oct 8 19:58:00.325125 containerd[1439]: time="2024-10-08T19:58:00.324452795Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Oct 8 19:58:00.325125 containerd[1439]: time="2024-10-08T19:58:00.324539189Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Oct 8 19:58:00.325533 containerd[1439]: time="2024-10-08T19:58:00.325506938Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni 
config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 8 19:58:00.325924 containerd[1439]: time="2024-10-08T19:58:00.325807064Z" level=info msg="Start subscribing containerd event" Oct 8 19:58:00.325924 containerd[1439]: time="2024-10-08T19:58:00.325882760Z" level=info msg="Start recovering state" Oct 8 19:58:00.326067 containerd[1439]: time="2024-10-08T19:58:00.326049434Z" level=info msg="Start event monitor" Oct 8 19:58:00.326067 containerd[1439]: time="2024-10-08T19:58:00.326070429Z" level=info msg="Start snapshots syncer" Oct 8 19:58:00.326119 containerd[1439]: time="2024-10-08T19:58:00.326080967Z" level=info msg="Start cni network conf syncer for default" Oct 8 19:58:00.326119 containerd[1439]: time="2024-10-08T19:58:00.326090097Z" level=info msg="Start streaming server" Oct 8 19:58:00.326435 containerd[1439]: time="2024-10-08T19:58:00.326415683Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Oct 8 19:58:00.326576 containerd[1439]: time="2024-10-08T19:58:00.326550945Z" level=info msg=serving... address=/run/containerd/containerd.sock Oct 8 19:58:00.330026 systemd[1]: Started containerd.service - containerd container runtime. Oct 8 19:58:00.331345 containerd[1439]: time="2024-10-08T19:58:00.331311944Z" level=info msg="containerd successfully booted in 0.049889s" Oct 8 19:58:00.452466 tar[1437]: linux-arm64/LICENSE Oct 8 19:58:00.452668 tar[1437]: linux-arm64/README.md Oct 8 19:58:00.464925 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Oct 8 19:58:01.134645 sshd_keygen[1431]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Oct 8 19:58:01.154157 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Oct 8 19:58:01.167262 systemd[1]: Starting issuegen.service - Generate /run/issue... Oct 8 19:58:01.173029 systemd[1]: issuegen.service: Deactivated successfully. Oct 8 19:58:01.173226 systemd[1]: Finished issuegen.service - Generate /run/issue. Oct 8 19:58:01.175809 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Oct 8 19:58:01.188544 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Oct 8 19:58:01.192046 systemd-networkd[1381]: eth0: Gained IPv6LL Oct 8 19:58:01.192774 systemd[1]: Started getty@tty1.service - Getty on tty1. Oct 8 19:58:01.195086 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Oct 8 19:58:01.196419 systemd[1]: Reached target getty.target - Login Prompts. Oct 8 19:58:01.198915 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Oct 8 19:58:01.201054 systemd[1]: Reached target network-online.target - Network is Online. Oct 8 19:58:01.203463 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Oct 8 19:58:01.206091 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 19:58:01.208133 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Oct 8 19:58:01.225366 systemd[1]: coreos-metadata.service: Deactivated successfully. Oct 8 19:58:01.225589 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Oct 8 19:58:01.227818 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Oct 8 19:58:01.233485 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Oct 8 19:58:01.691429 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
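The "no network config found in /etc/cni/net.d" error above is expected on first boot: containerd's CRI plugin looks for a CNI config in the NetworkPluginConfDir shown in the config dump, and a network plugin (flannel, Calico, ...) normally installs one later. For illustration only, a minimal Go sketch that drops a basic bridge conflist into place — the network name, bridge device, and subnet are placeholder assumptions, not values from this system:

    package main

    import (
    	"log"
    	"os"
    	"path/filepath"
    )

    // Minimal CNI bridge + host-local IPAM config. Name, bridge device,
    // and subnet are illustrative; real clusters get this file from
    // their network plugin rather than writing it by hand.
    const conflist = `{
      "cniVersion": "0.4.0",
      "name": "mynet",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "cni0",
          "isGateway": true,
          "ipMasq": true,
          "ipam": {
            "type": "host-local",
            "subnet": "10.22.0.0/16"
          }
        }
      ]
    }
    `

    func main() {
    	dir := "/etc/cni/net.d" // matches NetworkPluginConfDir in the log above
    	if err := os.MkdirAll(dir, 0o755); err != nil {
    		log.Fatal(err)
    	}
    	if err := os.WriteFile(filepath.Join(dir, "10-mynet.conflist"), []byte(conflist), 0o644); err != nil {
    		log.Fatal(err)
    	}
    }

Once a file like this exists, the CRI plugin's cni network conf syncer (started a few entries above) picks it up without a containerd restart.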
Oct 8 19:58:01.692975 systemd[1]: Reached target multi-user.target - Multi-User System. Oct 8 19:58:01.695502 (kubelet)[1526]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 8 19:58:01.695726 systemd[1]: Startup finished in 578ms (kernel) + 4.614s (initrd) + 3.406s (userspace) = 8.599s. Oct 8 19:58:02.141074 kubelet[1526]: E1008 19:58:02.140798 1526 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 8 19:58:02.143748 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 8 19:58:02.143917 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 8 19:58:06.175779 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Oct 8 19:58:06.176946 systemd[1]: Started sshd@0-10.0.0.130:22-10.0.0.1:42904.service - OpenSSH per-connection server daemon (10.0.0.1:42904). Oct 8 19:58:06.241078 sshd[1539]: Accepted publickey for core from 10.0.0.1 port 42904 ssh2: RSA SHA256:PeFR0GwG3Km7u6+IJymPx7tkM/vpusnYsvzmiMSzq3A Oct 8 19:58:06.243066 sshd[1539]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 19:58:06.253058 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Oct 8 19:58:06.263160 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Oct 8 19:58:06.265014 systemd-logind[1426]: New session 1 of user core. Oct 8 19:58:06.273917 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Oct 8 19:58:06.290175 systemd[1]: Starting user@500.service - User Manager for UID 500... Oct 8 19:58:06.292587 (systemd)[1543]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Oct 8 19:58:06.368410 systemd[1543]: Queued start job for default target default.target. Oct 8 19:58:06.377989 systemd[1543]: Created slice app.slice - User Application Slice. Oct 8 19:58:06.378034 systemd[1543]: Reached target paths.target - Paths. Oct 8 19:58:06.378046 systemd[1543]: Reached target timers.target - Timers. Oct 8 19:58:06.379747 systemd[1543]: Starting dbus.socket - D-Bus User Message Bus Socket... Oct 8 19:58:06.390361 systemd[1543]: Listening on dbus.socket - D-Bus User Message Bus Socket. Oct 8 19:58:06.390426 systemd[1543]: Reached target sockets.target - Sockets. Oct 8 19:58:06.390439 systemd[1543]: Reached target basic.target - Basic System. Oct 8 19:58:06.390477 systemd[1543]: Reached target default.target - Main User Target. Oct 8 19:58:06.390506 systemd[1543]: Startup finished in 92ms. Oct 8 19:58:06.391030 systemd[1]: Started user@500.service - User Manager for UID 500. Oct 8 19:58:06.392511 systemd[1]: Started session-1.scope - Session 1 of User core. Oct 8 19:58:06.456315 systemd[1]: Started sshd@1-10.0.0.130:22-10.0.0.1:42910.service - OpenSSH per-connection server daemon (10.0.0.1:42910). Oct 8 19:58:06.492169 sshd[1554]: Accepted publickey for core from 10.0.0.1 port 42910 ssh2: RSA SHA256:PeFR0GwG3Km7u6+IJymPx7tkM/vpusnYsvzmiMSzq3A Oct 8 19:58:06.493577 sshd[1554]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 19:58:06.498080 systemd-logind[1426]: New session 2 of user core. 
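The kubelet exit above ("open /var/lib/kubelet/config.yaml: no such file or directory") is the normal state of a node that has not yet run kubeadm init or kubeadm join — kubeadm generates that file itself, which is why systemd keeps restarting the unit in the entries that follow. Purely as a sketch, the shape of a minimal KubeletConfiguration written to that path; field names follow the kubelet.config.k8s.io/v1beta1 schema as documented for recent kubelets, and cgroupDriver: systemd matches the SystemdCgroup:true runc option in the containerd config dump above:

    package main

    import (
    	"log"
    	"os"
    )

    // Minimal KubeletConfiguration sketch. On a kubeadm cluster this file
    // is generated during `kubeadm init`/`kubeadm join`; writing it by
    // hand is for experiments only. containerRuntimeEndpoint is the
    // config-file home for the deprecated --container-runtime-endpoint
    // flag seen later in this log.
    const kubeletConfig = `apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd
    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
    staticPodPath: /etc/kubernetes/manifests
    `

    func main() {
    	if err := os.MkdirAll("/var/lib/kubelet", 0o755); err != nil {
    		log.Fatal(err)
    	}
    	if err := os.WriteFile("/var/lib/kubelet/config.yaml", []byte(kubeletConfig), 0o644); err != nil {
    		log.Fatal(err)
    	}
    }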
Oct 8 19:58:06.507062 systemd[1]: Started session-2.scope - Session 2 of User core. Oct 8 19:58:06.560219 sshd[1554]: pam_unix(sshd:session): session closed for user core Oct 8 19:58:06.572445 systemd[1]: sshd@1-10.0.0.130:22-10.0.0.1:42910.service: Deactivated successfully. Oct 8 19:58:06.574056 systemd[1]: session-2.scope: Deactivated successfully. Oct 8 19:58:06.576452 systemd-logind[1426]: Session 2 logged out. Waiting for processes to exit. Oct 8 19:58:06.577086 systemd[1]: Started sshd@2-10.0.0.130:22-10.0.0.1:42922.service - OpenSSH per-connection server daemon (10.0.0.1:42922). Oct 8 19:58:06.578255 systemd-logind[1426]: Removed session 2. Oct 8 19:58:06.616116 sshd[1561]: Accepted publickey for core from 10.0.0.1 port 42922 ssh2: RSA SHA256:PeFR0GwG3Km7u6+IJymPx7tkM/vpusnYsvzmiMSzq3A Oct 8 19:58:06.615503 sshd[1561]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 19:58:06.620936 systemd-logind[1426]: New session 3 of user core. Oct 8 19:58:06.627035 systemd[1]: Started session-3.scope - Session 3 of User core. Oct 8 19:58:06.680232 sshd[1561]: pam_unix(sshd:session): session closed for user core Oct 8 19:58:06.696204 systemd[1]: sshd@2-10.0.0.130:22-10.0.0.1:42922.service: Deactivated successfully. Oct 8 19:58:06.697560 systemd[1]: session-3.scope: Deactivated successfully. Oct 8 19:58:06.699456 systemd-logind[1426]: Session 3 logged out. Waiting for processes to exit. Oct 8 19:58:06.700493 systemd[1]: Started sshd@3-10.0.0.130:22-10.0.0.1:42924.service - OpenSSH per-connection server daemon (10.0.0.1:42924). Oct 8 19:58:06.705572 systemd-logind[1426]: Removed session 3. Oct 8 19:58:06.746769 sshd[1568]: Accepted publickey for core from 10.0.0.1 port 42924 ssh2: RSA SHA256:PeFR0GwG3Km7u6+IJymPx7tkM/vpusnYsvzmiMSzq3A Oct 8 19:58:06.748101 sshd[1568]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 19:58:06.752829 systemd-logind[1426]: New session 4 of user core. Oct 8 19:58:06.762049 systemd[1]: Started session-4.scope - Session 4 of User core. Oct 8 19:58:06.820067 sshd[1568]: pam_unix(sshd:session): session closed for user core Oct 8 19:58:06.834238 systemd[1]: sshd@3-10.0.0.130:22-10.0.0.1:42924.service: Deactivated successfully. Oct 8 19:58:06.836073 systemd[1]: session-4.scope: Deactivated successfully. Oct 8 19:58:06.837370 systemd-logind[1426]: Session 4 logged out. Waiting for processes to exit. Oct 8 19:58:06.838739 systemd[1]: Started sshd@4-10.0.0.130:22-10.0.0.1:42934.service - OpenSSH per-connection server daemon (10.0.0.1:42934). Oct 8 19:58:06.839407 systemd-logind[1426]: Removed session 4. Oct 8 19:58:06.875386 sshd[1575]: Accepted publickey for core from 10.0.0.1 port 42934 ssh2: RSA SHA256:PeFR0GwG3Km7u6+IJymPx7tkM/vpusnYsvzmiMSzq3A Oct 8 19:58:06.876698 sshd[1575]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 19:58:06.881450 systemd-logind[1426]: New session 5 of user core. Oct 8 19:58:06.894985 systemd[1]: Started session-5.scope - Session 5 of User core. Oct 8 19:58:06.961184 sudo[1578]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Oct 8 19:58:06.961442 sudo[1578]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 8 19:58:06.978823 sudo[1578]: pam_unix(sudo:session): session closed for user root Oct 8 19:58:06.980543 sshd[1575]: pam_unix(sshd:session): session closed for user core Oct 8 19:58:06.990342 systemd[1]: sshd@4-10.0.0.130:22-10.0.0.1:42934.service: Deactivated successfully. 
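The session chain above — publickey auth for user core, one session scope per connection — is what an automated SSH client produces. A hedged sketch of the client side using golang.org/x/crypto/ssh; the key path and command are assumptions, and a real client should verify host keys instead of ignoring them:

    package main

    import (
    	"fmt"
    	"log"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    func main() {
    	// Load the client's private key (path is an assumption).
    	keyBytes, err := os.ReadFile("/home/core/.ssh/id_rsa")
    	if err != nil {
    		log.Fatal(err)
    	}
    	signer, err := ssh.ParsePrivateKey(keyBytes)
    	if err != nil {
    		log.Fatal(err)
    	}

    	cfg := &ssh.ClientConfig{
    		User: "core",
    		Auth: []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		// Acceptable for a lab VM only; use a known_hosts callback in real use.
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(),
    	}
    	client, err := ssh.Dial("tcp", "10.0.0.130:22", cfg)
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer client.Close()

    	sess, err := client.NewSession()
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer sess.Close()

    	out, err := sess.Output("uptime")
    	if err != nil {
    		log.Fatal(err)
    	}
    	fmt.Print(string(out))
    }

Each run of this program would appear in the log as one Accepted publickey / session opened / session closed triple, like the sessions above.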
Oct 8 19:58:06.991790 systemd[1]: session-5.scope: Deactivated successfully. Oct 8 19:58:06.993811 systemd-logind[1426]: Session 5 logged out. Waiting for processes to exit. Oct 8 19:58:07.003279 systemd[1]: Started sshd@5-10.0.0.130:22-10.0.0.1:42936.service - OpenSSH per-connection server daemon (10.0.0.1:42936). Oct 8 19:58:07.004526 systemd-logind[1426]: Removed session 5. Oct 8 19:58:07.035942 sshd[1583]: Accepted publickey for core from 10.0.0.1 port 42936 ssh2: RSA SHA256:PeFR0GwG3Km7u6+IJymPx7tkM/vpusnYsvzmiMSzq3A Oct 8 19:58:07.037212 sshd[1583]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 19:58:07.041591 systemd-logind[1426]: New session 6 of user core. Oct 8 19:58:07.049014 systemd[1]: Started session-6.scope - Session 6 of User core. Oct 8 19:58:07.100341 sudo[1587]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Oct 8 19:58:07.100632 sudo[1587]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 8 19:58:07.105614 sudo[1587]: pam_unix(sudo:session): session closed for user root Oct 8 19:58:07.110352 sudo[1586]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Oct 8 19:58:07.110629 sudo[1586]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 8 19:58:07.130080 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Oct 8 19:58:07.131256 auditctl[1590]: No rules Oct 8 19:58:07.131535 systemd[1]: audit-rules.service: Deactivated successfully. Oct 8 19:58:07.131697 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Oct 8 19:58:07.133764 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Oct 8 19:58:07.157123 augenrules[1608]: No rules Oct 8 19:58:07.160001 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Oct 8 19:58:07.161200 sudo[1586]: pam_unix(sudo:session): session closed for user root Oct 8 19:58:07.162932 sshd[1583]: pam_unix(sshd:session): session closed for user core Oct 8 19:58:07.172247 systemd[1]: sshd@5-10.0.0.130:22-10.0.0.1:42936.service: Deactivated successfully. Oct 8 19:58:07.173555 systemd[1]: session-6.scope: Deactivated successfully. Oct 8 19:58:07.176697 systemd-logind[1426]: Session 6 logged out. Waiting for processes to exit. Oct 8 19:58:07.178385 systemd[1]: Started sshd@6-10.0.0.130:22-10.0.0.1:42944.service - OpenSSH per-connection server daemon (10.0.0.1:42944). Oct 8 19:58:07.179397 systemd-logind[1426]: Removed session 6. Oct 8 19:58:07.216640 sshd[1616]: Accepted publickey for core from 10.0.0.1 port 42944 ssh2: RSA SHA256:PeFR0GwG3Km7u6+IJymPx7tkM/vpusnYsvzmiMSzq3A Oct 8 19:58:07.218106 sshd[1616]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 19:58:07.224128 systemd-logind[1426]: New session 7 of user core. Oct 8 19:58:07.238059 systemd[1]: Started session-7.scope - Session 7 of User core. Oct 8 19:58:07.289576 sudo[1619]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Oct 8 19:58:07.289842 sudo[1619]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 8 19:58:07.595165 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Oct 8 19:58:07.595393 (dockerd)[1638]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Oct 8 19:58:07.868636 dockerd[1638]: time="2024-10-08T19:58:07.868505338Z" level=info msg="Starting up" Oct 8 19:58:07.997109 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport3384340690-merged.mount: Deactivated successfully. Oct 8 19:58:08.021828 dockerd[1638]: time="2024-10-08T19:58:08.021740573Z" level=info msg="Loading containers: start." Oct 8 19:58:08.133906 kernel: Initializing XFRM netlink socket Oct 8 19:58:08.210938 systemd-networkd[1381]: docker0: Link UP Oct 8 19:58:08.229167 dockerd[1638]: time="2024-10-08T19:58:08.229113458Z" level=info msg="Loading containers: done." Oct 8 19:58:08.243559 dockerd[1638]: time="2024-10-08T19:58:08.243412813Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Oct 8 19:58:08.243559 dockerd[1638]: time="2024-10-08T19:58:08.243527549Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Oct 8 19:58:08.243723 dockerd[1638]: time="2024-10-08T19:58:08.243625872Z" level=info msg="Daemon has completed initialization" Oct 8 19:58:08.286489 dockerd[1638]: time="2024-10-08T19:58:08.285886002Z" level=info msg="API listen on /run/docker.sock" Oct 8 19:58:08.286642 systemd[1]: Started docker.service - Docker Application Container Engine. Oct 8 19:58:08.670010 containerd[1439]: time="2024-10-08T19:58:08.669488917Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.0\"" Oct 8 19:58:08.995374 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1500129380-merged.mount: Deactivated successfully. Oct 8 19:58:09.406423 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount33645938.mount: Deactivated successfully. 
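Once dockerd reports "API listen on /run/docker.sock", the Engine API is reachable over that unix socket. A minimal Go health check without the Docker SDK — the "unix" host in the URL is a dummy, since the custom transport routes every request to the socket:

    package main

    import (
    	"context"
    	"fmt"
    	"io"
    	"log"
    	"net"
    	"net/http"
    )

    func main() {
    	// Dial the unix socket instead of TCP for every request.
    	tr := &http.Transport{
    		DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
    			var d net.Dialer
    			return d.DialContext(ctx, "unix", "/run/docker.sock")
    		},
    	}
    	client := &http.Client{Transport: tr}

    	// GET /_ping returns "OK" when the daemon is healthy.
    	resp, err := client.Get("http://unix/_ping")
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer resp.Body.Close()

    	body, err := io.ReadAll(resp.Body)
    	if err != nil {
    		log.Fatal(err)
    	}
    	fmt.Printf("%s %s\n", resp.Status, body)
    }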
Oct 8 19:58:10.191226 containerd[1439]: time="2024-10-08T19:58:10.191158645Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:58:10.192031 containerd[1439]: time="2024-10-08T19:58:10.191789705Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.0: active requests=0, bytes read=25691523" Oct 8 19:58:10.192694 containerd[1439]: time="2024-10-08T19:58:10.192654761Z" level=info msg="ImageCreate event name:\"sha256:cd0f0ae0ec9e0cdc092079156c122bf034ba3f24d31c1b1dd1b52a42ecf9b388\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:58:10.196250 containerd[1439]: time="2024-10-08T19:58:10.196211742Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:58:10.197985 containerd[1439]: time="2024-10-08T19:58:10.197945222Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.0\" with image id \"sha256:cd0f0ae0ec9e0cdc092079156c122bf034ba3f24d31c1b1dd1b52a42ecf9b388\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.0\", repo digest \"registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf\", size \"25688321\" in 1.528412324s" Oct 8 19:58:10.197985 containerd[1439]: time="2024-10-08T19:58:10.197978037Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.0\" returns image reference \"sha256:cd0f0ae0ec9e0cdc092079156c122bf034ba3f24d31c1b1dd1b52a42ecf9b388\"" Oct 8 19:58:10.198699 containerd[1439]: time="2024-10-08T19:58:10.198671838Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.0\"" Oct 8 19:58:11.251397 containerd[1439]: time="2024-10-08T19:58:11.251355310Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:58:11.253322 containerd[1439]: time="2024-10-08T19:58:11.253228056Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.0: active requests=0, bytes read=22460088" Oct 8 19:58:11.254031 containerd[1439]: time="2024-10-08T19:58:11.254007163Z" level=info msg="ImageCreate event name:\"sha256:fcb0683e6bdbd083710cf2d6fd7eb699c77fe4994c38a5c82d059e2e3cb4c2fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:58:11.257950 containerd[1439]: time="2024-10-08T19:58:11.257377680Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:58:11.258956 containerd[1439]: time="2024-10-08T19:58:11.258928756Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.0\" with image id \"sha256:fcb0683e6bdbd083710cf2d6fd7eb699c77fe4994c38a5c82d059e2e3cb4c2fd\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.0\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d\", size \"23947353\" in 1.060220535s" Oct 8 19:58:11.259495 containerd[1439]: time="2024-10-08T19:58:11.259474552Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.0\" returns image reference \"sha256:fcb0683e6bdbd083710cf2d6fd7eb699c77fe4994c38a5c82d059e2e3cb4c2fd\"" Oct 8 19:58:11.260057 containerd[1439]: 
time="2024-10-08T19:58:11.260033785Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.0\"" Oct 8 19:58:12.394236 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Oct 8 19:58:12.404038 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 19:58:12.499738 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 19:58:12.503281 (kubelet)[1852]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 8 19:58:12.535874 kubelet[1852]: E1008 19:58:12.535811 1852 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 8 19:58:12.539000 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 8 19:58:12.539154 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 8 19:58:13.919113 containerd[1439]: time="2024-10-08T19:58:13.919062670Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:58:13.919978 containerd[1439]: time="2024-10-08T19:58:13.919948577Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.0: active requests=0, bytes read=17018560" Oct 8 19:58:13.920654 containerd[1439]: time="2024-10-08T19:58:13.920619813Z" level=info msg="ImageCreate event name:\"sha256:fbbbd428abb4dae52ab3018797d00d5840a739f0cc5697b662791831a60b0adb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:58:13.923932 containerd[1439]: time="2024-10-08T19:58:13.923898009Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:58:13.925215 containerd[1439]: time="2024-10-08T19:58:13.925169513Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.0\" with image id \"sha256:fbbbd428abb4dae52ab3018797d00d5840a739f0cc5697b662791831a60b0adb\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.0\", repo digest \"registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808\", size \"18505843\" in 2.665104086s" Oct 8 19:58:13.925215 containerd[1439]: time="2024-10-08T19:58:13.925207202Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.0\" returns image reference \"sha256:fbbbd428abb4dae52ab3018797d00d5840a739f0cc5697b662791831a60b0adb\"" Oct 8 19:58:13.925908 containerd[1439]: time="2024-10-08T19:58:13.925708996Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.0\"" Oct 8 19:58:14.923967 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1243954470.mount: Deactivated successfully. 
Oct 8 19:58:15.125690 containerd[1439]: time="2024-10-08T19:58:15.125637478Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:58:15.126161 containerd[1439]: time="2024-10-08T19:58:15.126115878Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.0: active requests=0, bytes read=26753317" Oct 8 19:58:15.126768 containerd[1439]: time="2024-10-08T19:58:15.126738620Z" level=info msg="ImageCreate event name:\"sha256:71d55d66fd4eec8986225089a135fadd96bc6624d987096808772ce1e1924d89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:58:15.128612 containerd[1439]: time="2024-10-08T19:58:15.128576022Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:58:15.129351 containerd[1439]: time="2024-10-08T19:58:15.129316610Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.0\" with image id \"sha256:71d55d66fd4eec8986225089a135fadd96bc6624d987096808772ce1e1924d89\", repo tag \"registry.k8s.io/kube-proxy:v1.31.0\", repo digest \"registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe\", size \"26752334\" in 1.203576664s" Oct 8 19:58:15.129386 containerd[1439]: time="2024-10-08T19:58:15.129360582Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.0\" returns image reference \"sha256:71d55d66fd4eec8986225089a135fadd96bc6624d987096808772ce1e1924d89\"" Oct 8 19:58:15.129807 containerd[1439]: time="2024-10-08T19:58:15.129785471Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Oct 8 19:58:15.626979 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2186222779.mount: Deactivated successfully. 
Oct 8 19:58:16.307193 containerd[1439]: time="2024-10-08T19:58:16.307144443Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:58:16.307666 containerd[1439]: time="2024-10-08T19:58:16.307552803Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485383" Oct 8 19:58:16.308840 containerd[1439]: time="2024-10-08T19:58:16.308789628Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:58:16.312609 containerd[1439]: time="2024-10-08T19:58:16.312569598Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:58:16.313748 containerd[1439]: time="2024-10-08T19:58:16.313657651Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.18383783s" Oct 8 19:58:16.313748 containerd[1439]: time="2024-10-08T19:58:16.313694403Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Oct 8 19:58:16.314483 containerd[1439]: time="2024-10-08T19:58:16.314336582Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Oct 8 19:58:16.745527 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2103979960.mount: Deactivated successfully. 
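The PullImage / ImageCreate pairs above come from the CRI plugin driving containerd's ordinary image service, so the same pull can be issued directly with the containerd Go client. A sketch, assuming the v1.7-era client module github.com/containerd/containerd (matching the containerd v1.7.x on this host) and the "k8s.io" namespace the CRI plugin keeps its images in:

    package main

    import (
    	"context"
    	"fmt"
    	"log"

    	"github.com/containerd/containerd"
    	"github.com/containerd/containerd/namespaces"
    )

    func main() {
    	client, err := containerd.New("/run/containerd/containerd.sock")
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer client.Close()

    	// The CRI plugin stores Kubernetes images in the "k8s.io" namespace.
    	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

    	img, err := client.Pull(ctx, "registry.k8s.io/pause:3.10", containerd.WithPullUnpack)
    	if err != nil {
    		log.Fatal(err)
    	}
    	size, err := img.Size(ctx)
    	if err != nil {
    		log.Fatal(err)
    	}
    	fmt.Printf("pulled %s (%d bytes)\n", img.Name(), size)
    }

Running this produces the same ImageCreate events and tmpmount churn seen in the surrounding entries, since the unpack path snapshots each layer through a temporary mount.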
Oct 8 19:58:16.753824 containerd[1439]: time="2024-10-08T19:58:16.753779567Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:58:16.754563 containerd[1439]: time="2024-10-08T19:58:16.754484149Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" Oct 8 19:58:16.755205 containerd[1439]: time="2024-10-08T19:58:16.755139553Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:58:16.757275 containerd[1439]: time="2024-10-08T19:58:16.757216906Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:58:16.758464 containerd[1439]: time="2024-10-08T19:58:16.758420626Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 444.05114ms" Oct 8 19:58:16.758464 containerd[1439]: time="2024-10-08T19:58:16.758461666Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Oct 8 19:58:16.758973 containerd[1439]: time="2024-10-08T19:58:16.758948420Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Oct 8 19:58:17.291966 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4148729870.mount: Deactivated successfully. Oct 8 19:58:19.976281 containerd[1439]: time="2024-10-08T19:58:19.976226847Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:58:19.976837 containerd[1439]: time="2024-10-08T19:58:19.976800494Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=65868194" Oct 8 19:58:19.977723 containerd[1439]: time="2024-10-08T19:58:19.977689050Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:58:19.981241 containerd[1439]: time="2024-10-08T19:58:19.981171117Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:58:19.983689 containerd[1439]: time="2024-10-08T19:58:19.983083847Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 3.224025534s" Oct 8 19:58:19.983689 containerd[1439]: time="2024-10-08T19:58:19.983125314Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\"" Oct 8 19:58:22.789498 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
Oct 8 19:58:22.800043 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 19:58:22.894950 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 19:58:22.898597 (kubelet)[2004]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 8 19:58:22.934056 kubelet[2004]: E1008 19:58:22.934018 2004 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 8 19:58:22.936575 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 8 19:58:22.936710 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 8 19:58:24.249286 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 19:58:24.259206 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 19:58:24.280804 systemd[1]: Reloading requested from client PID 2019 ('systemctl') (unit session-7.scope)... Oct 8 19:58:24.280824 systemd[1]: Reloading... Oct 8 19:58:24.340914 zram_generator::config[2058]: No configuration found. Oct 8 19:58:24.425968 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 8 19:58:24.478283 systemd[1]: Reloading finished in 197 ms. Oct 8 19:58:24.514251 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 19:58:24.516859 systemd[1]: kubelet.service: Deactivated successfully. Oct 8 19:58:24.517072 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 19:58:24.518507 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 19:58:24.621062 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 19:58:24.624672 (kubelet)[2105]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 8 19:58:24.663226 kubelet[2105]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 8 19:58:24.663226 kubelet[2105]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Oct 8 19:58:24.663226 kubelet[2105]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Oct 8 19:58:24.663568 kubelet[2105]: I1008 19:58:24.663401 2105 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 8 19:58:25.315489 kubelet[2105]: I1008 19:58:25.315446 2105 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Oct 8 19:58:25.315489 kubelet[2105]: I1008 19:58:25.315478 2105 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 8 19:58:25.315738 kubelet[2105]: I1008 19:58:25.315714 2105 server.go:929] "Client rotation is on, will bootstrap in background" Oct 8 19:58:25.369505 kubelet[2105]: E1008 19:58:25.369446 2105 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.130:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.130:6443: connect: connection refused" logger="UnhandledError" Oct 8 19:58:25.370376 kubelet[2105]: I1008 19:58:25.370356 2105 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 8 19:58:25.377533 kubelet[2105]: E1008 19:58:25.377499 2105 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Oct 8 19:58:25.377533 kubelet[2105]: I1008 19:58:25.377532 2105 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Oct 8 19:58:25.380911 kubelet[2105]: I1008 19:58:25.380892 2105 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Oct 8 19:58:25.381826 kubelet[2105]: I1008 19:58:25.381753 2105 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Oct 8 19:58:25.381930 kubelet[2105]: I1008 19:58:25.381906 2105 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 8 19:58:25.382108 kubelet[2105]: I1008 19:58:25.381931 2105 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Oct 8 19:58:25.382269 kubelet[2105]: I1008 19:58:25.382250 2105 topology_manager.go:138] "Creating topology manager with none policy" Oct 8 19:58:25.382269 kubelet[2105]: I1008 19:58:25.382264 2105 container_manager_linux.go:300] "Creating device plugin manager" Oct 8 19:58:25.382470 kubelet[2105]: I1008 19:58:25.382451 2105 state_mem.go:36] "Initialized new in-memory state store" Oct 8 19:58:25.384159 kubelet[2105]: I1008 19:58:25.384133 2105 kubelet.go:408] "Attempting to sync node with API server" Oct 8 19:58:25.384202 kubelet[2105]: I1008 19:58:25.384162 2105 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 8 19:58:25.384271 kubelet[2105]: I1008 19:58:25.384262 2105 kubelet.go:314] "Adding apiserver pod source" Oct 8 19:58:25.384298 kubelet[2105]: I1008 19:58:25.384274 2105 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 8 19:58:25.386428 kubelet[2105]: I1008 19:58:25.386383 2105 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Oct 8 19:58:25.387338 kubelet[2105]: W1008 19:58:25.387288 2105 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.130:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.130:6443: connect: connection refused Oct 8 19:58:25.387379 kubelet[2105]: E1008 19:58:25.387351 2105 reflector.go:158] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.130:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.130:6443: connect: connection refused" logger="UnhandledError" Oct 8 19:58:25.388857 kubelet[2105]: W1008 19:58:25.388779 2105 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.130:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.130:6443: connect: connection refused Oct 8 19:58:25.388857 kubelet[2105]: E1008 19:58:25.388827 2105 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.130:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.130:6443: connect: connection refused" logger="UnhandledError" Oct 8 19:58:25.390499 kubelet[2105]: I1008 19:58:25.390481 2105 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Oct 8 19:58:25.391452 kubelet[2105]: W1008 19:58:25.391428 2105 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Oct 8 19:58:25.392383 kubelet[2105]: I1008 19:58:25.392122 2105 server.go:1269] "Started kubelet" Oct 8 19:58:25.393203 kubelet[2105]: I1008 19:58:25.392763 2105 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Oct 8 19:58:25.393203 kubelet[2105]: I1008 19:58:25.392786 2105 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 8 19:58:25.393556 kubelet[2105]: I1008 19:58:25.393532 2105 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 8 19:58:25.395557 kubelet[2105]: I1008 19:58:25.395501 2105 server.go:460] "Adding debug handlers to kubelet server" Oct 8 19:58:25.399177 kubelet[2105]: I1008 19:58:25.396735 2105 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 8 19:58:25.399177 kubelet[2105]: I1008 19:58:25.397144 2105 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Oct 8 19:58:25.399177 kubelet[2105]: I1008 19:58:25.398614 2105 volume_manager.go:289] "Starting Kubelet Volume Manager" Oct 8 19:58:25.399177 kubelet[2105]: I1008 19:58:25.398729 2105 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Oct 8 19:58:25.399177 kubelet[2105]: I1008 19:58:25.398782 2105 reconciler.go:26] "Reconciler: start to sync state" Oct 8 19:58:25.399177 kubelet[2105]: E1008 19:58:25.398814 2105 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 8 19:58:25.399177 kubelet[2105]: W1008 19:58:25.399156 2105 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.130:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.130:6443: connect: connection refused Oct 8 19:58:25.399374 kubelet[2105]: E1008 19:58:25.399203 2105 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get 
\"https://10.0.0.130:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.130:6443: connect: connection refused" logger="UnhandledError" Oct 8 19:58:25.399374 kubelet[2105]: E1008 19:58:25.399315 2105 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.130:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.130:6443: connect: connection refused" interval="200ms" Oct 8 19:58:25.400625 kubelet[2105]: E1008 19:58:25.399635 2105 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.130:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.130:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.17fc929048766c0b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-10-08 19:58:25.392094219 +0000 UTC m=+0.764537938,LastTimestamp:2024-10-08 19:58:25.392094219 +0000 UTC m=+0.764537938,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Oct 8 19:58:25.400772 kubelet[2105]: I1008 19:58:25.400745 2105 factory.go:221] Registration of the systemd container factory successfully Oct 8 19:58:25.401101 kubelet[2105]: I1008 19:58:25.400856 2105 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 8 19:58:25.401240 kubelet[2105]: E1008 19:58:25.401212 2105 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 8 19:58:25.403969 kubelet[2105]: I1008 19:58:25.402438 2105 factory.go:221] Registration of the containerd container factory successfully Oct 8 19:58:25.417238 kubelet[2105]: I1008 19:58:25.417197 2105 cpu_manager.go:214] "Starting CPU manager" policy="none" Oct 8 19:58:25.417238 kubelet[2105]: I1008 19:58:25.417218 2105 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Oct 8 19:58:25.417238 kubelet[2105]: I1008 19:58:25.417237 2105 state_mem.go:36] "Initialized new in-memory state store" Oct 8 19:58:25.419357 kubelet[2105]: I1008 19:58:25.419298 2105 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Oct 8 19:58:25.420384 kubelet[2105]: I1008 19:58:25.420354 2105 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Oct 8 19:58:25.420384 kubelet[2105]: I1008 19:58:25.420378 2105 status_manager.go:217] "Starting to sync pod status with apiserver" Oct 8 19:58:25.420515 kubelet[2105]: I1008 19:58:25.420398 2105 kubelet.go:2321] "Starting kubelet main sync loop" Oct 8 19:58:25.420515 kubelet[2105]: E1008 19:58:25.420440 2105 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 8 19:58:25.421404 kubelet[2105]: W1008 19:58:25.421343 2105 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.130:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.130:6443: connect: connection refused Oct 8 19:58:25.421479 kubelet[2105]: E1008 19:58:25.421401 2105 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.130:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.130:6443: connect: connection refused" logger="UnhandledError" Oct 8 19:58:25.499945 kubelet[2105]: E1008 19:58:25.499901 2105 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 8 19:58:25.521196 kubelet[2105]: E1008 19:58:25.521137 2105 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Oct 8 19:58:25.525883 kubelet[2105]: I1008 19:58:25.525840 2105 policy_none.go:49] "None policy: Start" Oct 8 19:58:25.526704 kubelet[2105]: I1008 19:58:25.526646 2105 memory_manager.go:170] "Starting memorymanager" policy="None" Oct 8 19:58:25.526704 kubelet[2105]: I1008 19:58:25.526668 2105 state_mem.go:35] "Initializing new in-memory state store" Oct 8 19:58:25.532645 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Oct 8 19:58:25.550286 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Oct 8 19:58:25.552740 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Oct 8 19:58:25.568835 kubelet[2105]: I1008 19:58:25.568607 2105 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 8 19:58:25.568835 kubelet[2105]: I1008 19:58:25.568816 2105 eviction_manager.go:189] "Eviction manager: starting control loop" Oct 8 19:58:25.568940 kubelet[2105]: I1008 19:58:25.568828 2105 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 8 19:58:25.569470 kubelet[2105]: I1008 19:58:25.569081 2105 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 8 19:58:25.570633 kubelet[2105]: E1008 19:58:25.570590 2105 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Oct 8 19:58:25.600158 kubelet[2105]: E1008 19:58:25.600115 2105 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.130:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.130:6443: connect: connection refused" interval="400ms" Oct 8 19:58:25.670131 kubelet[2105]: I1008 19:58:25.670095 2105 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Oct 8 19:58:25.670498 kubelet[2105]: E1008 19:58:25.670465 2105 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.130:6443/api/v1/nodes\": dial tcp 10.0.0.130:6443: connect: connection refused" node="localhost" Oct 8 19:58:25.729208 systemd[1]: Created slice kubepods-burstable-pod8f5beb94cfba60389a43b96439e80993.slice - libcontainer container kubepods-burstable-pod8f5beb94cfba60389a43b96439e80993.slice. Oct 8 19:58:25.754408 systemd[1]: Created slice kubepods-burstable-pod344660bab292c4b91cf719f133c08ba2.slice - libcontainer container kubepods-burstable-pod344660bab292c4b91cf719f133c08ba2.slice. Oct 8 19:58:25.766983 systemd[1]: Created slice kubepods-burstable-pod1510be5a54dc8eef4f27b06886c891dc.slice - libcontainer container kubepods-burstable-pod1510be5a54dc8eef4f27b06886c891dc.slice. 
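Every "dial tcp 10.0.0.130:6443: connect: connection refused" above is the kubelet probing an API server that does not exist yet — it is about to create it itself from the static pod manifests, which is what the kubepods-burstable-pod*.slice units being created here are for. A small Go probe loop that waits for the endpoint to come up; /healthz and the skipped TLS verification are pragmatic choices for a bootstrap-only check, not production settings:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    func main() {
    	// The apiserver serves TLS with a cluster CA we don't have here,
    	// so skip verification for this bootstrap-only liveness probe.
    	client := &http.Client{
    		Timeout: 2 * time.Second,
    		Transport: &http.Transport{
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}

    	for {
    		resp, err := client.Get("https://10.0.0.130:6443/healthz")
    		if err != nil {
    			fmt.Println("apiserver not up yet:", err)
    			time.Sleep(2 * time.Second)
    			continue
    		}
    		resp.Body.Close()
    		fmt.Println("apiserver responded:", resp.Status)
    		return
    	}
    }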
Oct 8 19:58:25.801825 kubelet[2105]: I1008 19:58:25.801781 2105 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8f5beb94cfba60389a43b96439e80993-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"8f5beb94cfba60389a43b96439e80993\") " pod="kube-system/kube-apiserver-localhost" Oct 8 19:58:25.801825 kubelet[2105]: I1008 19:58:25.801818 2105 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/344660bab292c4b91cf719f133c08ba2-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"344660bab292c4b91cf719f133c08ba2\") " pod="kube-system/kube-controller-manager-localhost" Oct 8 19:58:25.801825 kubelet[2105]: I1008 19:58:25.801837 2105 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/344660bab292c4b91cf719f133c08ba2-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"344660bab292c4b91cf719f133c08ba2\") " pod="kube-system/kube-controller-manager-localhost" Oct 8 19:58:25.802005 kubelet[2105]: I1008 19:58:25.801855 2105 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/344660bab292c4b91cf719f133c08ba2-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"344660bab292c4b91cf719f133c08ba2\") " pod="kube-system/kube-controller-manager-localhost" Oct 8 19:58:25.802005 kubelet[2105]: I1008 19:58:25.801886 2105 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/344660bab292c4b91cf719f133c08ba2-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"344660bab292c4b91cf719f133c08ba2\") " pod="kube-system/kube-controller-manager-localhost" Oct 8 19:58:25.802005 kubelet[2105]: I1008 19:58:25.801902 2105 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8f5beb94cfba60389a43b96439e80993-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"8f5beb94cfba60389a43b96439e80993\") " pod="kube-system/kube-apiserver-localhost" Oct 8 19:58:25.802005 kubelet[2105]: I1008 19:58:25.801918 2105 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8f5beb94cfba60389a43b96439e80993-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"8f5beb94cfba60389a43b96439e80993\") " pod="kube-system/kube-apiserver-localhost" Oct 8 19:58:25.802005 kubelet[2105]: I1008 19:58:25.801932 2105 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/344660bab292c4b91cf719f133c08ba2-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"344660bab292c4b91cf719f133c08ba2\") " pod="kube-system/kube-controller-manager-localhost" Oct 8 19:58:25.802106 kubelet[2105]: I1008 19:58:25.801947 2105 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1510be5a54dc8eef4f27b06886c891dc-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"1510be5a54dc8eef4f27b06886c891dc\") " 
pod="kube-system/kube-scheduler-localhost" Oct 8 19:58:25.872287 kubelet[2105]: I1008 19:58:25.872194 2105 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Oct 8 19:58:25.872577 kubelet[2105]: E1008 19:58:25.872534 2105 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.130:6443/api/v1/nodes\": dial tcp 10.0.0.130:6443: connect: connection refused" node="localhost" Oct 8 19:58:26.000935 kubelet[2105]: E1008 19:58:26.000850 2105 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.130:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.130:6443: connect: connection refused" interval="800ms" Oct 8 19:58:26.053426 kubelet[2105]: E1008 19:58:26.053374 2105 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:58:26.054093 containerd[1439]: time="2024-10-08T19:58:26.054043343Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:8f5beb94cfba60389a43b96439e80993,Namespace:kube-system,Attempt:0,}" Oct 8 19:58:26.065910 kubelet[2105]: E1008 19:58:26.065742 2105 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:58:26.068284 containerd[1439]: time="2024-10-08T19:58:26.068252046Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:344660bab292c4b91cf719f133c08ba2,Namespace:kube-system,Attempt:0,}" Oct 8 19:58:26.069549 kubelet[2105]: E1008 19:58:26.069519 2105 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:58:26.069889 containerd[1439]: time="2024-10-08T19:58:26.069844044Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:1510be5a54dc8eef4f27b06886c891dc,Namespace:kube-system,Attempt:0,}" Oct 8 19:58:26.274269 kubelet[2105]: I1008 19:58:26.274162 2105 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Oct 8 19:58:26.274651 kubelet[2105]: E1008 19:58:26.274604 2105 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.130:6443/api/v1/nodes\": dial tcp 10.0.0.130:6443: connect: connection refused" node="localhost" Oct 8 19:58:26.333343 kubelet[2105]: W1008 19:58:26.333301 2105 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.130:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.130:6443: connect: connection refused Oct 8 19:58:26.333393 kubelet[2105]: E1008 19:58:26.333349 2105 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.130:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.130:6443: connect: connection refused" logger="UnhandledError" Oct 8 19:58:26.451590 kubelet[2105]: W1008 19:58:26.451512 2105 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.130:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 
10.0.0.130:6443: connect: connection refused Oct 8 19:58:26.451590 kubelet[2105]: E1008 19:58:26.451589 2105 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.130:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.130:6443: connect: connection refused" logger="UnhandledError" Oct 8 19:58:26.517420 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1334997321.mount: Deactivated successfully. Oct 8 19:58:26.521754 containerd[1439]: time="2024-10-08T19:58:26.521705490Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 8 19:58:26.522813 containerd[1439]: time="2024-10-08T19:58:26.522774870Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Oct 8 19:58:26.523387 containerd[1439]: time="2024-10-08T19:58:26.523354226Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 8 19:58:26.524119 containerd[1439]: time="2024-10-08T19:58:26.524089183Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 8 19:58:26.524766 containerd[1439]: time="2024-10-08T19:58:26.524694846Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Oct 8 19:58:26.525376 containerd[1439]: time="2024-10-08T19:58:26.525348238Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Oct 8 19:58:26.525910 containerd[1439]: time="2024-10-08T19:58:26.525880746Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 8 19:58:26.529308 containerd[1439]: time="2024-10-08T19:58:26.529265309Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 8 19:58:26.530152 containerd[1439]: time="2024-10-08T19:58:26.530121350Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 460.200508ms" Oct 8 19:58:26.533534 containerd[1439]: time="2024-10-08T19:58:26.533413018Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 479.288711ms" Oct 8 19:58:26.534361 containerd[1439]: time="2024-10-08T19:58:26.534334766Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id 
\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 466.011488ms" Oct 8 19:58:26.648523 containerd[1439]: time="2024-10-08T19:58:26.648209153Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 19:58:26.648523 containerd[1439]: time="2024-10-08T19:58:26.648298926Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 19:58:26.648523 containerd[1439]: time="2024-10-08T19:58:26.648311098Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:58:26.648523 containerd[1439]: time="2024-10-08T19:58:26.648440872Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:58:26.648879 containerd[1439]: time="2024-10-08T19:58:26.648467579Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 19:58:26.648879 containerd[1439]: time="2024-10-08T19:58:26.648781062Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 19:58:26.648879 containerd[1439]: time="2024-10-08T19:58:26.648801323Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:58:26.649117 containerd[1439]: time="2024-10-08T19:58:26.648908633Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:58:26.649704 containerd[1439]: time="2024-10-08T19:58:26.649618083Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 19:58:26.649704 containerd[1439]: time="2024-10-08T19:58:26.649682069Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 19:58:26.649820 containerd[1439]: time="2024-10-08T19:58:26.649701009Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:58:26.649820 containerd[1439]: time="2024-10-08T19:58:26.649798989Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:58:26.650299 kubelet[2105]: W1008 19:58:26.650241 2105 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.130:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.130:6443: connect: connection refused Oct 8 19:58:26.650383 kubelet[2105]: E1008 19:58:26.650313 2105 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.130:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.130:6443: connect: connection refused" logger="UnhandledError" Oct 8 19:58:26.674057 systemd[1]: Started cri-containerd-89e09d5d22453717f139789dd01a5c989f8d92ad4bde9c5e41fa6aae157d185c.scope - libcontainer container 89e09d5d22453717f139789dd01a5c989f8d92ad4bde9c5e41fa6aae157d185c. Oct 8 19:58:26.675333 systemd[1]: Started cri-containerd-cfb3137700a2537e05ac95dcc46804e1cf3f71475b6d6e49394c4fe192960b92.scope - libcontainer container cfb3137700a2537e05ac95dcc46804e1cf3f71475b6d6e49394c4fe192960b92. Oct 8 19:58:26.676351 systemd[1]: Started cri-containerd-e0fa9c4da30a98c26f24128f5a0f221303eb0c052e3d697b05cd3ebbcedd4c2b.scope - libcontainer container e0fa9c4da30a98c26f24128f5a0f221303eb0c052e3d697b05cd3ebbcedd4c2b. Oct 8 19:58:26.708583 containerd[1439]: time="2024-10-08T19:58:26.708460718Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:1510be5a54dc8eef4f27b06886c891dc,Namespace:kube-system,Attempt:0,} returns sandbox id \"89e09d5d22453717f139789dd01a5c989f8d92ad4bde9c5e41fa6aae157d185c\"" Oct 8 19:58:26.712563 kubelet[2105]: E1008 19:58:26.712531 2105 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:58:26.713135 containerd[1439]: time="2024-10-08T19:58:26.712816160Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:344660bab292c4b91cf719f133c08ba2,Namespace:kube-system,Attempt:0,} returns sandbox id \"cfb3137700a2537e05ac95dcc46804e1cf3f71475b6d6e49394c4fe192960b92\"" Oct 8 19:58:26.713666 containerd[1439]: time="2024-10-08T19:58:26.713607294Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:8f5beb94cfba60389a43b96439e80993,Namespace:kube-system,Attempt:0,} returns sandbox id \"e0fa9c4da30a98c26f24128f5a0f221303eb0c052e3d697b05cd3ebbcedd4c2b\"" Oct 8 19:58:26.714590 kubelet[2105]: E1008 19:58:26.713970 2105 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:58:26.714964 kubelet[2105]: E1008 19:58:26.714935 2105 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:58:26.715847 containerd[1439]: time="2024-10-08T19:58:26.715813364Z" level=info msg="CreateContainer within sandbox \"89e09d5d22453717f139789dd01a5c989f8d92ad4bde9c5e41fa6aae157d185c\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Oct 8 19:58:26.716332 containerd[1439]: time="2024-10-08T19:58:26.716302267Z" level=info msg="CreateContainer within sandbox 
\"cfb3137700a2537e05ac95dcc46804e1cf3f71475b6d6e49394c4fe192960b92\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Oct 8 19:58:26.717375 containerd[1439]: time="2024-10-08T19:58:26.717347102Z" level=info msg="CreateContainer within sandbox \"e0fa9c4da30a98c26f24128f5a0f221303eb0c052e3d697b05cd3ebbcedd4c2b\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Oct 8 19:58:26.732586 containerd[1439]: time="2024-10-08T19:58:26.732540898Z" level=info msg="CreateContainer within sandbox \"cfb3137700a2537e05ac95dcc46804e1cf3f71475b6d6e49394c4fe192960b92\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"acb68ee11f3a80db46366fb831227301efbb887ed31d3731e282ba5bf2a0dba1\"" Oct 8 19:58:26.733157 containerd[1439]: time="2024-10-08T19:58:26.733119013Z" level=info msg="StartContainer for \"acb68ee11f3a80db46366fb831227301efbb887ed31d3731e282ba5bf2a0dba1\"" Oct 8 19:58:26.737714 containerd[1439]: time="2024-10-08T19:58:26.737677144Z" level=info msg="CreateContainer within sandbox \"e0fa9c4da30a98c26f24128f5a0f221303eb0c052e3d697b05cd3ebbcedd4c2b\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"9bda6135da482724d48a61536229f9ff345daac91fc25b5cfc97675fcc863f5e\"" Oct 8 19:58:26.738332 containerd[1439]: time="2024-10-08T19:58:26.738305711Z" level=info msg="StartContainer for \"9bda6135da482724d48a61536229f9ff345daac91fc25b5cfc97675fcc863f5e\"" Oct 8 19:58:26.738668 containerd[1439]: time="2024-10-08T19:58:26.738538711Z" level=info msg="CreateContainer within sandbox \"89e09d5d22453717f139789dd01a5c989f8d92ad4bde9c5e41fa6aae157d185c\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"9556136d255b6b92f6c1bd5af3368d60edd3195332e3fd4fd6f5844502d60abc\"" Oct 8 19:58:26.739094 containerd[1439]: time="2024-10-08T19:58:26.738879101Z" level=info msg="StartContainer for \"9556136d255b6b92f6c1bd5af3368d60edd3195332e3fd4fd6f5844502d60abc\"" Oct 8 19:58:26.766021 systemd[1]: Started cri-containerd-acb68ee11f3a80db46366fb831227301efbb887ed31d3731e282ba5bf2a0dba1.scope - libcontainer container acb68ee11f3a80db46366fb831227301efbb887ed31d3731e282ba5bf2a0dba1. Oct 8 19:58:26.769532 systemd[1]: Started cri-containerd-9556136d255b6b92f6c1bd5af3368d60edd3195332e3fd4fd6f5844502d60abc.scope - libcontainer container 9556136d255b6b92f6c1bd5af3368d60edd3195332e3fd4fd6f5844502d60abc. Oct 8 19:58:26.770607 systemd[1]: Started cri-containerd-9bda6135da482724d48a61536229f9ff345daac91fc25b5cfc97675fcc863f5e.scope - libcontainer container 9bda6135da482724d48a61536229f9ff345daac91fc25b5cfc97675fcc863f5e. 
Oct 8 19:58:26.802332 kubelet[2105]: E1008 19:58:26.802221 2105 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.130:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.130:6443: connect: connection refused" interval="1.6s" Oct 8 19:58:26.810250 containerd[1439]: time="2024-10-08T19:58:26.810211949Z" level=info msg="StartContainer for \"9556136d255b6b92f6c1bd5af3368d60edd3195332e3fd4fd6f5844502d60abc\" returns successfully" Oct 8 19:58:26.811209 containerd[1439]: time="2024-10-08T19:58:26.810425168Z" level=info msg="StartContainer for \"9bda6135da482724d48a61536229f9ff345daac91fc25b5cfc97675fcc863f5e\" returns successfully" Oct 8 19:58:26.848512 containerd[1439]: time="2024-10-08T19:58:26.843358219Z" level=info msg="StartContainer for \"acb68ee11f3a80db46366fb831227301efbb887ed31d3731e282ba5bf2a0dba1\" returns successfully" Oct 8 19:58:26.848594 kubelet[2105]: W1008 19:58:26.844222 2105 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.130:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.130:6443: connect: connection refused Oct 8 19:58:26.848594 kubelet[2105]: E1008 19:58:26.844283 2105 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.130:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.130:6443: connect: connection refused" logger="UnhandledError" Oct 8 19:58:27.079996 kubelet[2105]: I1008 19:58:27.076492 2105 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Oct 8 19:58:27.427544 kubelet[2105]: E1008 19:58:27.427435 2105 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:58:27.429480 kubelet[2105]: E1008 19:58:27.429455 2105 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:58:27.431124 kubelet[2105]: E1008 19:58:27.431101 2105 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:58:28.327305 kubelet[2105]: I1008 19:58:28.327262 2105 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Oct 8 19:58:28.327305 kubelet[2105]: E1008 19:58:28.327303 2105 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Oct 8 19:58:28.338834 kubelet[2105]: E1008 19:58:28.338801 2105 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 8 19:58:28.433411 kubelet[2105]: E1008 19:58:28.433374 2105 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:58:28.433784 kubelet[2105]: E1008 19:58:28.433764 2105 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:58:28.439003 kubelet[2105]: E1008 19:58:28.438984 2105 kubelet_node_status.go:453] "Error getting 
the current node from lister" err="node \"localhost\" not found" Oct 8 19:58:28.459499 kubelet[2105]: E1008 19:58:28.459468 2105 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-node-lease\" not found" interval="3.2s" Oct 8 19:58:28.539913 kubelet[2105]: E1008 19:58:28.539855 2105 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 8 19:58:28.641128 kubelet[2105]: E1008 19:58:28.640669 2105 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 8 19:58:29.386694 kubelet[2105]: I1008 19:58:29.386461 2105 apiserver.go:52] "Watching apiserver" Oct 8 19:58:29.399779 kubelet[2105]: I1008 19:58:29.399707 2105 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Oct 8 19:58:30.592227 systemd[1]: Reloading requested from client PID 2385 ('systemctl') (unit session-7.scope)... Oct 8 19:58:30.592246 systemd[1]: Reloading... Oct 8 19:58:30.668042 zram_generator::config[2427]: No configuration found. Oct 8 19:58:30.752805 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 8 19:58:30.756646 kubelet[2105]: E1008 19:58:30.756595 2105 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:58:30.819497 systemd[1]: Reloading finished in 226 ms. Oct 8 19:58:30.851665 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 19:58:30.867614 systemd[1]: kubelet.service: Deactivated successfully. Oct 8 19:58:30.868919 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 19:58:30.868977 systemd[1]: kubelet.service: Consumed 1.133s CPU time, 116.1M memory peak, 0B memory swap peak. Oct 8 19:58:30.879091 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 19:58:30.975979 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 19:58:30.980996 (kubelet)[2466]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 8 19:58:31.016446 kubelet[2466]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 8 19:58:31.016446 kubelet[2466]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Oct 8 19:58:31.016446 kubelet[2466]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Oct 8 19:58:31.016827 kubelet[2466]: I1008 19:58:31.016486 2466 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 8 19:58:31.023457 kubelet[2466]: I1008 19:58:31.023374 2466 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Oct 8 19:58:31.023457 kubelet[2466]: I1008 19:58:31.023454 2466 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 8 19:58:31.023687 kubelet[2466]: I1008 19:58:31.023671 2466 server.go:929] "Client rotation is on, will bootstrap in background" Oct 8 19:58:31.025899 kubelet[2466]: I1008 19:58:31.025844 2466 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Oct 8 19:58:31.028064 kubelet[2466]: I1008 19:58:31.027998 2466 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 8 19:58:31.035161 kubelet[2466]: E1008 19:58:31.035096 2466 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Oct 8 19:58:31.035161 kubelet[2466]: I1008 19:58:31.035128 2466 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Oct 8 19:58:31.038025 kubelet[2466]: I1008 19:58:31.037791 2466 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Oct 8 19:58:31.038113 kubelet[2466]: I1008 19:58:31.038039 2466 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Oct 8 19:58:31.038513 kubelet[2466]: I1008 19:58:31.038142 2466 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 8 19:58:31.038584 kubelet[2466]: I1008 19:58:31.038179 2466 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Oct 8 19:58:31.038659 kubelet[2466]: I1008 19:58:31.038597 2466 topology_manager.go:138] "Creating topology manager with none policy" Oct 8 19:58:31.038659 kubelet[2466]: I1008 19:58:31.038612 2466 container_manager_linux.go:300] "Creating device plugin manager" Oct 8 19:58:31.038659 kubelet[2466]: I1008 19:58:31.038647 2466 state_mem.go:36] "Initialized new in-memory state store" Oct 8 19:58:31.038877 kubelet[2466]: I1008 19:58:31.038755 2466 kubelet.go:408] "Attempting to sync node with API server" Oct 8 19:58:31.038877 kubelet[2466]: I1008 19:58:31.038775 2466 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 8 19:58:31.038877 kubelet[2466]: I1008 19:58:31.038799 2466 kubelet.go:314] "Adding apiserver pod source" Oct 8 19:58:31.038877 kubelet[2466]: I1008 19:58:31.038808 2466 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 8 19:58:31.042885 kubelet[2466]: I1008 19:58:31.041220 2466 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Oct 8 19:58:31.042885 kubelet[2466]: I1008 19:58:31.041732 2466 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Oct 8 19:58:31.042885 kubelet[2466]: I1008 19:58:31.042164 2466 server.go:1269] "Started kubelet" Oct 8 19:58:31.043206 kubelet[2466]: I1008 19:58:31.043072 2466 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Oct 8 19:58:31.047925 kubelet[2466]: I1008 19:58:31.044712 2466 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 8 19:58:31.047925 kubelet[2466]: I1008 19:58:31.044810 2466 server.go:460] "Adding debug handlers to kubelet server" Oct 8 19:58:31.053237 kubelet[2466]: I1008 19:58:31.053177 2466 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 8 19:58:31.053433 kubelet[2466]: I1008 19:58:31.053414 2466 server.go:236] "Starting to serve the podresources API" 
endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 8 19:58:31.056773 kubelet[2466]: I1008 19:58:31.056690 2466 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Oct 8 19:58:31.058031 kubelet[2466]: I1008 19:58:31.057999 2466 volume_manager.go:289] "Starting Kubelet Volume Manager" Oct 8 19:58:31.058191 kubelet[2466]: E1008 19:58:31.058164 2466 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 8 19:58:31.058712 kubelet[2466]: I1008 19:58:31.058398 2466 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Oct 8 19:58:31.058913 kubelet[2466]: I1008 19:58:31.058847 2466 reconciler.go:26] "Reconciler: start to sync state" Oct 8 19:58:31.059160 kubelet[2466]: I1008 19:58:31.059107 2466 factory.go:221] Registration of the systemd container factory successfully Oct 8 19:58:31.059380 kubelet[2466]: I1008 19:58:31.059249 2466 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 8 19:58:31.061258 kubelet[2466]: E1008 19:58:31.061229 2466 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 8 19:58:31.061362 kubelet[2466]: I1008 19:58:31.061257 2466 factory.go:221] Registration of the containerd container factory successfully Oct 8 19:58:31.078789 kubelet[2466]: I1008 19:58:31.078220 2466 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Oct 8 19:58:31.080921 kubelet[2466]: I1008 19:58:31.080525 2466 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Oct 8 19:58:31.080921 kubelet[2466]: I1008 19:58:31.080588 2466 status_manager.go:217] "Starting to sync pod status with apiserver" Oct 8 19:58:31.080921 kubelet[2466]: I1008 19:58:31.080606 2466 kubelet.go:2321] "Starting kubelet main sync loop" Oct 8 19:58:31.080921 kubelet[2466]: E1008 19:58:31.080658 2466 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 8 19:58:31.100458 kubelet[2466]: I1008 19:58:31.100432 2466 cpu_manager.go:214] "Starting CPU manager" policy="none" Oct 8 19:58:31.100458 kubelet[2466]: I1008 19:58:31.100447 2466 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Oct 8 19:58:31.100458 kubelet[2466]: I1008 19:58:31.100467 2466 state_mem.go:36] "Initialized new in-memory state store" Oct 8 19:58:31.100659 kubelet[2466]: I1008 19:58:31.100613 2466 state_mem.go:88] "Updated default CPUSet" cpuSet="" Oct 8 19:58:31.100659 kubelet[2466]: I1008 19:58:31.100624 2466 state_mem.go:96] "Updated CPUSet assignments" assignments={} Oct 8 19:58:31.100659 kubelet[2466]: I1008 19:58:31.100641 2466 policy_none.go:49] "None policy: Start" Oct 8 19:58:31.101109 kubelet[2466]: I1008 19:58:31.101094 2466 memory_manager.go:170] "Starting memorymanager" policy="None" Oct 8 19:58:31.101173 kubelet[2466]: I1008 19:58:31.101116 2466 state_mem.go:35] "Initializing new in-memory state store" Oct 8 19:58:31.101293 kubelet[2466]: I1008 19:58:31.101272 2466 state_mem.go:75] "Updated machine memory state" Oct 8 19:58:31.105509 kubelet[2466]: I1008 19:58:31.105426 2466 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 8 19:58:31.107473 kubelet[2466]: I1008 19:58:31.107155 2466 eviction_manager.go:189] "Eviction manager: starting control loop" Oct 8 19:58:31.107473 kubelet[2466]: I1008 19:58:31.107176 2466 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 8 19:58:31.107473 kubelet[2466]: I1008 19:58:31.107418 2466 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 8 19:58:31.191633 kubelet[2466]: E1008 19:58:31.191587 2466 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Oct 8 19:58:31.212089 kubelet[2466]: I1008 19:58:31.212054 2466 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Oct 8 19:58:31.219991 kubelet[2466]: I1008 19:58:31.219959 2466 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Oct 8 19:58:31.220101 kubelet[2466]: I1008 19:58:31.220043 2466 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Oct 8 19:58:31.360848 kubelet[2466]: I1008 19:58:31.360726 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8f5beb94cfba60389a43b96439e80993-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"8f5beb94cfba60389a43b96439e80993\") " pod="kube-system/kube-apiserver-localhost" Oct 8 19:58:31.360848 kubelet[2466]: I1008 19:58:31.360772 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/344660bab292c4b91cf719f133c08ba2-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: 
\"344660bab292c4b91cf719f133c08ba2\") " pod="kube-system/kube-controller-manager-localhost" Oct 8 19:58:31.360848 kubelet[2466]: I1008 19:58:31.360796 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/344660bab292c4b91cf719f133c08ba2-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"344660bab292c4b91cf719f133c08ba2\") " pod="kube-system/kube-controller-manager-localhost" Oct 8 19:58:31.360848 kubelet[2466]: I1008 19:58:31.360813 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1510be5a54dc8eef4f27b06886c891dc-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"1510be5a54dc8eef4f27b06886c891dc\") " pod="kube-system/kube-scheduler-localhost" Oct 8 19:58:31.360848 kubelet[2466]: I1008 19:58:31.360830 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8f5beb94cfba60389a43b96439e80993-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"8f5beb94cfba60389a43b96439e80993\") " pod="kube-system/kube-apiserver-localhost" Oct 8 19:58:31.361060 kubelet[2466]: I1008 19:58:31.360844 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8f5beb94cfba60389a43b96439e80993-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"8f5beb94cfba60389a43b96439e80993\") " pod="kube-system/kube-apiserver-localhost" Oct 8 19:58:31.361060 kubelet[2466]: I1008 19:58:31.360860 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/344660bab292c4b91cf719f133c08ba2-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"344660bab292c4b91cf719f133c08ba2\") " pod="kube-system/kube-controller-manager-localhost" Oct 8 19:58:31.361060 kubelet[2466]: I1008 19:58:31.360894 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/344660bab292c4b91cf719f133c08ba2-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"344660bab292c4b91cf719f133c08ba2\") " pod="kube-system/kube-controller-manager-localhost" Oct 8 19:58:31.361060 kubelet[2466]: I1008 19:58:31.360909 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/344660bab292c4b91cf719f133c08ba2-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"344660bab292c4b91cf719f133c08ba2\") " pod="kube-system/kube-controller-manager-localhost" Oct 8 19:58:31.491553 kubelet[2466]: E1008 19:58:31.491480 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:58:31.491553 kubelet[2466]: E1008 19:58:31.491501 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:58:31.492059 kubelet[2466]: E1008 19:58:31.492028 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 
8.8.8.8" Oct 8 19:58:32.040411 kubelet[2466]: I1008 19:58:32.040368 2466 apiserver.go:52] "Watching apiserver" Oct 8 19:58:32.059603 kubelet[2466]: I1008 19:58:32.059560 2466 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Oct 8 19:58:32.091243 kubelet[2466]: E1008 19:58:32.091193 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:58:32.091243 kubelet[2466]: E1008 19:58:32.091251 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:58:32.104340 kubelet[2466]: E1008 19:58:32.104107 2466 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Oct 8 19:58:32.104809 kubelet[2466]: E1008 19:58:32.104776 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:58:32.121328 kubelet[2466]: I1008 19:58:32.121237 2466 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.121224099 podStartE2EDuration="2.121224099s" podCreationTimestamp="2024-10-08 19:58:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-08 19:58:32.121209409 +0000 UTC m=+1.137012359" watchObservedRunningTime="2024-10-08 19:58:32.121224099 +0000 UTC m=+1.137027009" Oct 8 19:58:32.142154 kubelet[2466]: I1008 19:58:32.142029 2466 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.142012429 podStartE2EDuration="1.142012429s" podCreationTimestamp="2024-10-08 19:58:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-08 19:58:32.130400393 +0000 UTC m=+1.146203343" watchObservedRunningTime="2024-10-08 19:58:32.142012429 +0000 UTC m=+1.157815339" Oct 8 19:58:32.155915 kubelet[2466]: I1008 19:58:32.155769 2466 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.155753392 podStartE2EDuration="1.155753392s" podCreationTimestamp="2024-10-08 19:58:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-08 19:58:32.142670088 +0000 UTC m=+1.158473038" watchObservedRunningTime="2024-10-08 19:58:32.155753392 +0000 UTC m=+1.171556302" Oct 8 19:58:33.094260 kubelet[2466]: E1008 19:58:33.094219 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:58:34.344539 kubelet[2466]: E1008 19:58:34.344498 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:58:35.758376 sudo[1619]: pam_unix(sudo:session): session closed for user root Oct 8 19:58:35.760100 sshd[1616]: pam_unix(sshd:session): session closed for user core Oct 8 19:58:35.763471 systemd[1]: 
sshd@6-10.0.0.130:22-10.0.0.1:42944.service: Deactivated successfully. Oct 8 19:58:35.765533 systemd[1]: session-7.scope: Deactivated successfully. Oct 8 19:58:35.766976 systemd[1]: session-7.scope: Consumed 6.318s CPU time, 152.2M memory peak, 0B memory swap peak. Oct 8 19:58:35.767860 systemd-logind[1426]: Session 7 logged out. Waiting for processes to exit. Oct 8 19:58:35.769623 systemd-logind[1426]: Removed session 7. Oct 8 19:58:35.966040 kubelet[2466]: I1008 19:58:35.966012 2466 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Oct 8 19:58:35.968516 containerd[1439]: time="2024-10-08T19:58:35.968399525Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Oct 8 19:58:35.969125 kubelet[2466]: I1008 19:58:35.968762 2466 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Oct 8 19:58:36.220028 kubelet[2466]: E1008 19:58:36.219982 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:58:36.679325 systemd[1]: Created slice kubepods-besteffort-pod391bac30_bce2_411a_8001_b7d0651190a2.slice - libcontainer container kubepods-besteffort-pod391bac30_bce2_411a_8001_b7d0651190a2.slice. Oct 8 19:58:36.698354 kubelet[2466]: I1008 19:58:36.698164 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/391bac30-bce2-411a-8001-b7d0651190a2-kube-proxy\") pod \"kube-proxy-tdqdj\" (UID: \"391bac30-bce2-411a-8001-b7d0651190a2\") " pod="kube-system/kube-proxy-tdqdj" Oct 8 19:58:36.698354 kubelet[2466]: I1008 19:58:36.698220 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/391bac30-bce2-411a-8001-b7d0651190a2-xtables-lock\") pod \"kube-proxy-tdqdj\" (UID: \"391bac30-bce2-411a-8001-b7d0651190a2\") " pod="kube-system/kube-proxy-tdqdj" Oct 8 19:58:36.698354 kubelet[2466]: I1008 19:58:36.698255 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/391bac30-bce2-411a-8001-b7d0651190a2-lib-modules\") pod \"kube-proxy-tdqdj\" (UID: \"391bac30-bce2-411a-8001-b7d0651190a2\") " pod="kube-system/kube-proxy-tdqdj" Oct 8 19:58:36.698354 kubelet[2466]: I1008 19:58:36.698282 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zpbgc\" (UniqueName: \"kubernetes.io/projected/391bac30-bce2-411a-8001-b7d0651190a2-kube-api-access-zpbgc\") pod \"kube-proxy-tdqdj\" (UID: \"391bac30-bce2-411a-8001-b7d0651190a2\") " pod="kube-system/kube-proxy-tdqdj" Oct 8 19:58:36.812570 kubelet[2466]: E1008 19:58:36.812536 2466 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Oct 8 19:58:36.812570 kubelet[2466]: E1008 19:58:36.812568 2466 projected.go:194] Error preparing data for projected volume kube-api-access-zpbgc for pod kube-system/kube-proxy-tdqdj: configmap "kube-root-ca.crt" not found Oct 8 19:58:36.812723 kubelet[2466]: E1008 19:58:36.812619 2466 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/391bac30-bce2-411a-8001-b7d0651190a2-kube-api-access-zpbgc podName:391bac30-bce2-411a-8001-b7d0651190a2 
nodeName:}" failed. No retries permitted until 2024-10-08 19:58:37.312600826 +0000 UTC m=+6.328403736 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-zpbgc" (UniqueName: "kubernetes.io/projected/391bac30-bce2-411a-8001-b7d0651190a2-kube-api-access-zpbgc") pod "kube-proxy-tdqdj" (UID: "391bac30-bce2-411a-8001-b7d0651190a2") : configmap "kube-root-ca.crt" not found Oct 8 19:58:37.100829 kubelet[2466]: I1008 19:58:37.100704 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s6jgt\" (UniqueName: \"kubernetes.io/projected/46f9a99f-9a15-4a41-997e-908193edd2c8-kube-api-access-s6jgt\") pod \"tigera-operator-55748b469f-tmdnz\" (UID: \"46f9a99f-9a15-4a41-997e-908193edd2c8\") " pod="tigera-operator/tigera-operator-55748b469f-tmdnz" Oct 8 19:58:37.101857 kubelet[2466]: I1008 19:58:37.101480 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/46f9a99f-9a15-4a41-997e-908193edd2c8-var-lib-calico\") pod \"tigera-operator-55748b469f-tmdnz\" (UID: \"46f9a99f-9a15-4a41-997e-908193edd2c8\") " pod="tigera-operator/tigera-operator-55748b469f-tmdnz" Oct 8 19:58:37.102939 kubelet[2466]: E1008 19:58:37.102530 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:58:37.106274 systemd[1]: Created slice kubepods-besteffort-pod46f9a99f_9a15_4a41_997e_908193edd2c8.slice - libcontainer container kubepods-besteffort-pod46f9a99f_9a15_4a41_997e_908193edd2c8.slice. Oct 8 19:58:37.409951 containerd[1439]: time="2024-10-08T19:58:37.409783173Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-55748b469f-tmdnz,Uid:46f9a99f-9a15-4a41-997e-908193edd2c8,Namespace:tigera-operator,Attempt:0,}" Oct 8 19:58:37.442274 containerd[1439]: time="2024-10-08T19:58:37.442125106Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 19:58:37.442274 containerd[1439]: time="2024-10-08T19:58:37.442176612Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 19:58:37.442274 containerd[1439]: time="2024-10-08T19:58:37.442187578Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:58:37.442682 containerd[1439]: time="2024-10-08T19:58:37.442267338Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:58:37.466063 systemd[1]: Started cri-containerd-f12e7f51b8db5bf15ed40e6606024ad562db27895367050e4d59b715ec687ead.scope - libcontainer container f12e7f51b8db5bf15ed40e6606024ad562db27895367050e4d59b715ec687ead. 
Oct 8 19:58:37.502860 containerd[1439]: time="2024-10-08T19:58:37.502686365Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-55748b469f-tmdnz,Uid:46f9a99f-9a15-4a41-997e-908193edd2c8,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"f12e7f51b8db5bf15ed40e6606024ad562db27895367050e4d59b715ec687ead\"" Oct 8 19:58:37.505836 containerd[1439]: time="2024-10-08T19:58:37.505807225Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.3\"" Oct 8 19:58:37.592221 kubelet[2466]: E1008 19:58:37.590466 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:58:37.592344 containerd[1439]: time="2024-10-08T19:58:37.591162276Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-tdqdj,Uid:391bac30-bce2-411a-8001-b7d0651190a2,Namespace:kube-system,Attempt:0,}" Oct 8 19:58:37.618445 containerd[1439]: time="2024-10-08T19:58:37.618142615Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 19:58:37.618445 containerd[1439]: time="2024-10-08T19:58:37.618199844Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 19:58:37.618445 containerd[1439]: time="2024-10-08T19:58:37.618214932Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:58:37.618445 containerd[1439]: time="2024-10-08T19:58:37.618289970Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:58:37.640063 systemd[1]: Started cri-containerd-94627c54ac8b08d62d65e8cf67c0fec39c3bb86a95a872e0fbb6678e87636264.scope - libcontainer container 94627c54ac8b08d62d65e8cf67c0fec39c3bb86a95a872e0fbb6678e87636264. Oct 8 19:58:37.667493 containerd[1439]: time="2024-10-08T19:58:37.667363213Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-tdqdj,Uid:391bac30-bce2-411a-8001-b7d0651190a2,Namespace:kube-system,Attempt:0,} returns sandbox id \"94627c54ac8b08d62d65e8cf67c0fec39c3bb86a95a872e0fbb6678e87636264\"" Oct 8 19:58:37.669047 kubelet[2466]: E1008 19:58:37.668889 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:58:37.671524 containerd[1439]: time="2024-10-08T19:58:37.671471133Z" level=info msg="CreateContainer within sandbox \"94627c54ac8b08d62d65e8cf67c0fec39c3bb86a95a872e0fbb6678e87636264\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Oct 8 19:58:37.693690 containerd[1439]: time="2024-10-08T19:58:37.693629991Z" level=info msg="CreateContainer within sandbox \"94627c54ac8b08d62d65e8cf67c0fec39c3bb86a95a872e0fbb6678e87636264\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"15de90b3450853903fc29d588db2a73fec5e1a566b6865cbfe18a0b406f3a5ce\"" Oct 8 19:58:37.694424 containerd[1439]: time="2024-10-08T19:58:37.694398339Z" level=info msg="StartContainer for \"15de90b3450853903fc29d588db2a73fec5e1a566b6865cbfe18a0b406f3a5ce\"" Oct 8 19:58:37.716026 systemd[1]: Started cri-containerd-15de90b3450853903fc29d588db2a73fec5e1a566b6865cbfe18a0b406f3a5ce.scope - libcontainer container 15de90b3450853903fc29d588db2a73fec5e1a566b6865cbfe18a0b406f3a5ce. 
Oct 8 19:58:37.747343 containerd[1439]: time="2024-10-08T19:58:37.746896637Z" level=info msg="StartContainer for \"15de90b3450853903fc29d588db2a73fec5e1a566b6865cbfe18a0b406f3a5ce\" returns successfully" Oct 8 19:58:38.108059 kubelet[2466]: E1008 19:58:38.107424 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:58:38.590247 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount196745807.mount: Deactivated successfully. Oct 8 19:58:39.280841 containerd[1439]: time="2024-10-08T19:58:39.280036749Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:58:39.280841 containerd[1439]: time="2024-10-08T19:58:39.280796887Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.34.3: active requests=0, bytes read=19485931" Oct 8 19:58:39.281504 containerd[1439]: time="2024-10-08T19:58:39.281465705Z" level=info msg="ImageCreate event name:\"sha256:2fd8a2c22d96f6b41bf5709bd6ebbb915093532073f7039d03ab056b4e148f56\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:58:39.291984 containerd[1439]: time="2024-10-08T19:58:39.291929441Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:2cc4de6ad019ccc3abbd2615c159d0dcfb2ecdab90dc5805f08837d7c014d458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:58:39.292885 containerd[1439]: time="2024-10-08T19:58:39.292643599Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.34.3\" with image id \"sha256:2fd8a2c22d96f6b41bf5709bd6ebbb915093532073f7039d03ab056b4e148f56\", repo tag \"quay.io/tigera/operator:v1.34.3\", repo digest \"quay.io/tigera/operator@sha256:2cc4de6ad019ccc3abbd2615c159d0dcfb2ecdab90dc5805f08837d7c014d458\", size \"19480102\" in 1.786676293s" Oct 8 19:58:39.292885 containerd[1439]: time="2024-10-08T19:58:39.292672772Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.3\" returns image reference \"sha256:2fd8a2c22d96f6b41bf5709bd6ebbb915093532073f7039d03ab056b4e148f56\"" Oct 8 19:58:39.299735 containerd[1439]: time="2024-10-08T19:58:39.299691495Z" level=info msg="CreateContainer within sandbox \"f12e7f51b8db5bf15ed40e6606024ad562db27895367050e4d59b715ec687ead\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Oct 8 19:58:39.311625 containerd[1439]: time="2024-10-08T19:58:39.311567499Z" level=info msg="CreateContainer within sandbox \"f12e7f51b8db5bf15ed40e6606024ad562db27895367050e4d59b715ec687ead\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"4770f54ee80af011f3375873b5fe582d16fba61ae8757c1dab7358e8d5c99d58\"" Oct 8 19:58:39.314358 containerd[1439]: time="2024-10-08T19:58:39.312614205Z" level=info msg="StartContainer for \"4770f54ee80af011f3375873b5fe582d16fba61ae8757c1dab7358e8d5c99d58\"" Oct 8 19:58:39.340019 systemd[1]: Started cri-containerd-4770f54ee80af011f3375873b5fe582d16fba61ae8757c1dab7358e8d5c99d58.scope - libcontainer container 4770f54ee80af011f3375873b5fe582d16fba61ae8757c1dab7358e8d5c99d58. 
Oct 8 19:58:39.366485 containerd[1439]: time="2024-10-08T19:58:39.366440837Z" level=info msg="StartContainer for \"4770f54ee80af011f3375873b5fe582d16fba61ae8757c1dab7358e8d5c99d58\" returns successfully" Oct 8 19:58:40.124648 kubelet[2466]: I1008 19:58:40.124342 2466 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-tdqdj" podStartSLOduration=4.124323736 podStartE2EDuration="4.124323736s" podCreationTimestamp="2024-10-08 19:58:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-08 19:58:38.121049651 +0000 UTC m=+7.136852601" watchObservedRunningTime="2024-10-08 19:58:40.124323736 +0000 UTC m=+9.140126686" Oct 8 19:58:40.342686 kubelet[2466]: E1008 19:58:40.341787 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:58:40.357180 kubelet[2466]: I1008 19:58:40.357083 2466 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-55748b469f-tmdnz" podStartSLOduration=1.5645667319999998 podStartE2EDuration="3.357068991s" podCreationTimestamp="2024-10-08 19:58:37 +0000 UTC" firstStartedPulling="2024-10-08 19:58:37.505083819 +0000 UTC m=+6.520886769" lastFinishedPulling="2024-10-08 19:58:39.297586118 +0000 UTC m=+8.313389028" observedRunningTime="2024-10-08 19:58:40.125067326 +0000 UTC m=+9.140870276" watchObservedRunningTime="2024-10-08 19:58:40.357068991 +0000 UTC m=+9.372871941" Oct 8 19:58:41.115924 kubelet[2466]: E1008 19:58:41.115176 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:58:43.497987 systemd[1]: Created slice kubepods-besteffort-pod81dae5ac_eded_4782_b6ac_431797844e47.slice - libcontainer container kubepods-besteffort-pod81dae5ac_eded_4782_b6ac_431797844e47.slice. Oct 8 19:58:43.557615 systemd[1]: Created slice kubepods-besteffort-pod6ce87119_6462_4eec_842d_ef44ee31e6a3.slice - libcontainer container kubepods-besteffort-pod6ce87119_6462_4eec_842d_ef44ee31e6a3.slice. 
Oct 8 19:58:43.642216 kubelet[2466]: I1008 19:58:43.642083 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/81dae5ac-eded-4782-b6ac-431797844e47-tigera-ca-bundle\") pod \"calico-typha-7b454f87dd-vzqhg\" (UID: \"81dae5ac-eded-4782-b6ac-431797844e47\") " pod="calico-system/calico-typha-7b454f87dd-vzqhg" Oct 8 19:58:43.642216 kubelet[2466]: I1008 19:58:43.642123 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fgqtt\" (UniqueName: \"kubernetes.io/projected/81dae5ac-eded-4782-b6ac-431797844e47-kube-api-access-fgqtt\") pod \"calico-typha-7b454f87dd-vzqhg\" (UID: \"81dae5ac-eded-4782-b6ac-431797844e47\") " pod="calico-system/calico-typha-7b454f87dd-vzqhg" Oct 8 19:58:43.642216 kubelet[2466]: I1008 19:58:43.642146 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/81dae5ac-eded-4782-b6ac-431797844e47-typha-certs\") pod \"calico-typha-7b454f87dd-vzqhg\" (UID: \"81dae5ac-eded-4782-b6ac-431797844e47\") " pod="calico-system/calico-typha-7b454f87dd-vzqhg" Oct 8 19:58:43.673950 kubelet[2466]: E1008 19:58:43.673623 2466 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ggmcg" podUID="199cac1a-3d1f-4713-aec1-c124cb5e48d4" Oct 8 19:58:43.743411 kubelet[2466]: I1008 19:58:43.742704 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/6ce87119-6462-4eec-842d-ef44ee31e6a3-cni-bin-dir\") pod \"calico-node-nvqdx\" (UID: \"6ce87119-6462-4eec-842d-ef44ee31e6a3\") " pod="calico-system/calico-node-nvqdx" Oct 8 19:58:43.743411 kubelet[2466]: I1008 19:58:43.742744 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/6ce87119-6462-4eec-842d-ef44ee31e6a3-cni-log-dir\") pod \"calico-node-nvqdx\" (UID: \"6ce87119-6462-4eec-842d-ef44ee31e6a3\") " pod="calico-system/calico-node-nvqdx" Oct 8 19:58:43.743411 kubelet[2466]: I1008 19:58:43.742760 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/199cac1a-3d1f-4713-aec1-c124cb5e48d4-registration-dir\") pod \"csi-node-driver-ggmcg\" (UID: \"199cac1a-3d1f-4713-aec1-c124cb5e48d4\") " pod="calico-system/csi-node-driver-ggmcg" Oct 8 19:58:43.743411 kubelet[2466]: I1008 19:58:43.742799 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6ce87119-6462-4eec-842d-ef44ee31e6a3-tigera-ca-bundle\") pod \"calico-node-nvqdx\" (UID: \"6ce87119-6462-4eec-842d-ef44ee31e6a3\") " pod="calico-system/calico-node-nvqdx" Oct 8 19:58:43.743411 kubelet[2466]: I1008 19:58:43.742814 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/6ce87119-6462-4eec-842d-ef44ee31e6a3-cni-net-dir\") pod \"calico-node-nvqdx\" (UID: \"6ce87119-6462-4eec-842d-ef44ee31e6a3\") " pod="calico-system/calico-node-nvqdx"
Oct 8 19:58:43.743626 kubelet[2466]: I1008 19:58:43.742828 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8r478\" (UniqueName: \"kubernetes.io/projected/199cac1a-3d1f-4713-aec1-c124cb5e48d4-kube-api-access-8r478\") pod \"csi-node-driver-ggmcg\" (UID: \"199cac1a-3d1f-4713-aec1-c124cb5e48d4\") " pod="calico-system/csi-node-driver-ggmcg" Oct 8 19:58:43.743626 kubelet[2466]: I1008 19:58:43.742845 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/6ce87119-6462-4eec-842d-ef44ee31e6a3-policysync\") pod \"calico-node-nvqdx\" (UID: \"6ce87119-6462-4eec-842d-ef44ee31e6a3\") " pod="calico-system/calico-node-nvqdx" Oct 8 19:58:43.743626 kubelet[2466]: I1008 19:58:43.742859 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/6ce87119-6462-4eec-842d-ef44ee31e6a3-var-run-calico\") pod \"calico-node-nvqdx\" (UID: \"6ce87119-6462-4eec-842d-ef44ee31e6a3\") " pod="calico-system/calico-node-nvqdx" Oct 8 19:58:43.743626 kubelet[2466]: I1008 19:58:43.742906 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/6ce87119-6462-4eec-842d-ef44ee31e6a3-var-lib-calico\") pod \"calico-node-nvqdx\" (UID: \"6ce87119-6462-4eec-842d-ef44ee31e6a3\") " pod="calico-system/calico-node-nvqdx" Oct 8 19:58:43.743626 kubelet[2466]: I1008 19:58:43.742922 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/6ce87119-6462-4eec-842d-ef44ee31e6a3-flexvol-driver-host\") pod \"calico-node-nvqdx\" (UID: \"6ce87119-6462-4eec-842d-ef44ee31e6a3\") " pod="calico-system/calico-node-nvqdx" Oct 8 19:58:43.743733 kubelet[2466]: I1008 19:58:43.742948 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/199cac1a-3d1f-4713-aec1-c124cb5e48d4-varrun\") pod \"csi-node-driver-ggmcg\" (UID: \"199cac1a-3d1f-4713-aec1-c124cb5e48d4\") " pod="calico-system/csi-node-driver-ggmcg" Oct 8 19:58:43.743733 kubelet[2466]: I1008 19:58:43.742965 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6ce87119-6462-4eec-842d-ef44ee31e6a3-lib-modules\") pod \"calico-node-nvqdx\" (UID: \"6ce87119-6462-4eec-842d-ef44ee31e6a3\") " pod="calico-system/calico-node-nvqdx" Oct 8 19:58:43.743733 kubelet[2466]: I1008 19:58:43.742981 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/6ce87119-6462-4eec-842d-ef44ee31e6a3-node-certs\") pod \"calico-node-nvqdx\" (UID: \"6ce87119-6462-4eec-842d-ef44ee31e6a3\") " pod="calico-system/calico-node-nvqdx" Oct 8 19:58:43.743733 kubelet[2466]: I1008 19:58:43.742997 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/199cac1a-3d1f-4713-aec1-c124cb5e48d4-socket-dir\") pod \"csi-node-driver-ggmcg\" (UID: \"199cac1a-3d1f-4713-aec1-c124cb5e48d4\") " pod="calico-system/csi-node-driver-ggmcg"
Oct 8 19:58:43.743733 kubelet[2466]: I1008 19:58:43.743013 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6ce87119-6462-4eec-842d-ef44ee31e6a3-xtables-lock\") pod \"calico-node-nvqdx\" (UID: \"6ce87119-6462-4eec-842d-ef44ee31e6a3\") " pod="calico-system/calico-node-nvqdx" Oct 8 19:58:43.743834 kubelet[2466]: I1008 19:58:43.743047 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xx9gh\" (UniqueName: \"kubernetes.io/projected/6ce87119-6462-4eec-842d-ef44ee31e6a3-kube-api-access-xx9gh\") pod \"calico-node-nvqdx\" (UID: \"6ce87119-6462-4eec-842d-ef44ee31e6a3\") " pod="calico-system/calico-node-nvqdx" Oct 8 19:58:43.743834 kubelet[2466]: I1008 19:58:43.743061 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/199cac1a-3d1f-4713-aec1-c124cb5e48d4-kubelet-dir\") pod \"csi-node-driver-ggmcg\" (UID: \"199cac1a-3d1f-4713-aec1-c124cb5e48d4\") " pod="calico-system/csi-node-driver-ggmcg" Oct 8 19:58:43.803244 kubelet[2466]: E1008 19:58:43.803131 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:58:43.805225 containerd[1439]: time="2024-10-08T19:58:43.805185849Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7b454f87dd-vzqhg,Uid:81dae5ac-eded-4782-b6ac-431797844e47,Namespace:calico-system,Attempt:0,}" Oct 8 19:58:43.831389 containerd[1439]: time="2024-10-08T19:58:43.830316729Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 19:58:43.831953 containerd[1439]: time="2024-10-08T19:58:43.831855498Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 19:58:43.831953 containerd[1439]: time="2024-10-08T19:58:43.831912397Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:58:43.832122 containerd[1439]: time="2024-10-08T19:58:43.832070812Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:58:43.845028 kubelet[2466]: E1008 19:58:43.844997 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:58:43.849069 kubelet[2466]: W1008 19:58:43.848910 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:58:43.849069 kubelet[2466]: E1008 19:58:43.848960 2466 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:58:43.849245 kubelet[2466]: E1008 19:58:43.849230 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:58:43.849400 kubelet[2466]: W1008 19:58:43.849290 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:58:43.849400 kubelet[2466]: E1008 19:58:43.849372 2466 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:58:43.849559 kubelet[2466]: E1008 19:58:43.849546 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:58:43.849618 kubelet[2466]: W1008 19:58:43.849607 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:58:43.849737 kubelet[2466]: E1008 19:58:43.849715 2466 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:58:43.850424 kubelet[2466]: E1008 19:58:43.849950 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:58:43.850424 kubelet[2466]: W1008 19:58:43.849972 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:58:43.850525 kubelet[2466]: E1008 19:58:43.850463 2466 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:58:43.851499 kubelet[2466]: E1008 19:58:43.851390 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:58:43.851499 kubelet[2466]: W1008 19:58:43.851419 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:58:43.851623 kubelet[2466]: E1008 19:58:43.851566 2466 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:58:43.852606 kubelet[2466]: E1008 19:58:43.852113 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:58:43.852606 kubelet[2466]: W1008 19:58:43.852128 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:58:43.852817 kubelet[2466]: E1008 19:58:43.852780 2466 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:58:43.853648 kubelet[2466]: E1008 19:58:43.853611 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:58:43.853831 kubelet[2466]: W1008 19:58:43.853728 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:58:43.853831 kubelet[2466]: E1008 19:58:43.853794 2466 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:58:43.853989 kubelet[2466]: E1008 19:58:43.853973 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:58:43.854129 kubelet[2466]: W1008 19:58:43.854037 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:58:43.854129 kubelet[2466]: E1008 19:58:43.854089 2466 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:58:43.854426 kubelet[2466]: E1008 19:58:43.854411 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:58:43.854589 kubelet[2466]: W1008 19:58:43.854472 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:58:43.854589 kubelet[2466]: E1008 19:58:43.854538 2466 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:58:43.854808 kubelet[2466]: E1008 19:58:43.854793 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:58:43.855009 kubelet[2466]: W1008 19:58:43.854917 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:58:43.855009 kubelet[2466]: E1008 19:58:43.854979 2466 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:58:43.855227 kubelet[2466]: E1008 19:58:43.855141 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:58:43.855227 kubelet[2466]: W1008 19:58:43.855154 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:58:43.855505 kubelet[2466]: E1008 19:58:43.855490 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:58:43.855572 kubelet[2466]: W1008 19:58:43.855561 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:58:43.855697 kubelet[2466]: E1008 19:58:43.855578 2466 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:58:43.855697 kubelet[2466]: E1008 19:58:43.855672 2466 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:58:43.855996 kubelet[2466]: E1008 19:58:43.855886 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:58:43.855996 kubelet[2466]: W1008 19:58:43.855899 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:58:43.856088 kubelet[2466]: E1008 19:58:43.856029 2466 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:58:43.856604 kubelet[2466]: E1008 19:58:43.856587 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:58:43.856788 kubelet[2466]: W1008 19:58:43.856682 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:58:43.856788 kubelet[2466]: E1008 19:58:43.856762 2466 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:58:43.857023 kubelet[2466]: E1008 19:58:43.857008 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:58:43.857093 kubelet[2466]: W1008 19:58:43.857082 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:58:43.857208 kubelet[2466]: E1008 19:58:43.857181 2466 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:58:43.857546 kubelet[2466]: E1008 19:58:43.857455 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:58:43.857546 kubelet[2466]: W1008 19:58:43.857467 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:58:43.857546 kubelet[2466]: E1008 19:58:43.857512 2466 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:58:43.857757 kubelet[2466]: E1008 19:58:43.857741 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:58:43.857814 kubelet[2466]: W1008 19:58:43.857803 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:58:43.857978 kubelet[2466]: E1008 19:58:43.857948 2466 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:58:43.858722 kubelet[2466]: E1008 19:58:43.858627 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:58:43.858722 kubelet[2466]: W1008 19:58:43.858643 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:58:43.858722 kubelet[2466]: E1008 19:58:43.858692 2466 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:58:43.859034 kubelet[2466]: E1008 19:58:43.859000 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:58:43.859034 kubelet[2466]: W1008 19:58:43.859017 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:58:43.859216 kubelet[2466]: E1008 19:58:43.859161 2466 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:58:43.859375 kubelet[2466]: E1008 19:58:43.859361 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:58:43.859506 kubelet[2466]: W1008 19:58:43.859435 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:58:43.859506 kubelet[2466]: E1008 19:58:43.859501 2466 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:58:43.859770 kubelet[2466]: E1008 19:58:43.859756 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:58:43.859914 kubelet[2466]: W1008 19:58:43.859822 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:58:43.859914 kubelet[2466]: E1008 19:58:43.859854 2466 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:58:43.860820 kubelet[2466]: E1008 19:58:43.860800 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:58:43.861654 kubelet[2466]: W1008 19:58:43.861623 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:58:43.862049 kubelet[2466]: E1008 19:58:43.861974 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:58:43.862147 kubelet[2466]: W1008 19:58:43.862134 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:58:43.862330 kubelet[2466]: E1008 19:58:43.862050 2466 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:58:43.862442 kubelet[2466]: E1008 19:58:43.862427 2466 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:58:43.863982 kubelet[2466]: E1008 19:58:43.863957 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:58:43.864110 kubelet[2466]: W1008 19:58:43.864067 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:58:43.864110 kubelet[2466]: E1008 19:58:43.864086 2466 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:58:43.865053 systemd[1]: Started cri-containerd-cbfc257fb1ac294ee93f9ff22513e4f4cf20391b0f2ea5bb418c68de6b8d138f.scope - libcontainer container cbfc257fb1ac294ee93f9ff22513e4f4cf20391b0f2ea5bb418c68de6b8d138f. 
Oct 8 19:58:43.872033 kubelet[2466]: E1008 19:58:43.872007 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:58:43.872033 kubelet[2466]: W1008 19:58:43.872026 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:58:43.872153 kubelet[2466]: E1008 19:58:43.872042 2466 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:58:43.901718 containerd[1439]: time="2024-10-08T19:58:43.901638847Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7b454f87dd-vzqhg,Uid:81dae5ac-eded-4782-b6ac-431797844e47,Namespace:calico-system,Attempt:0,} returns sandbox id \"cbfc257fb1ac294ee93f9ff22513e4f4cf20391b0f2ea5bb418c68de6b8d138f\"" Oct 8 19:58:43.902742 kubelet[2466]: E1008 19:58:43.902714 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:58:43.903778 containerd[1439]: time="2024-10-08T19:58:43.903746971Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.1\"" Oct 8 19:58:44.160541 kubelet[2466]: E1008 19:58:44.160325 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:58:44.161986 containerd[1439]: time="2024-10-08T19:58:44.161948334Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-nvqdx,Uid:6ce87119-6462-4eec-842d-ef44ee31e6a3,Namespace:calico-system,Attempt:0,}" Oct 8 19:58:44.184306 containerd[1439]: time="2024-10-08T19:58:44.183964149Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 19:58:44.184306 containerd[1439]: time="2024-10-08T19:58:44.184020808Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 19:58:44.184306 containerd[1439]: time="2024-10-08T19:58:44.184045976Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:58:44.184306 containerd[1439]: time="2024-10-08T19:58:44.184152010Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:58:44.208033 systemd[1]: Started cri-containerd-a8c0d87cd1b962f5adbc66147f8227b8103f0fa1f3ae8c7d4d65de4e9458d3b0.scope - libcontainer container a8c0d87cd1b962f5adbc66147f8227b8103f0fa1f3ae8c7d4d65de4e9458d3b0. 
Oct 8 19:58:44.234766 containerd[1439]: time="2024-10-08T19:58:44.232888077Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-nvqdx,Uid:6ce87119-6462-4eec-842d-ef44ee31e6a3,Namespace:calico-system,Attempt:0,} returns sandbox id \"a8c0d87cd1b962f5adbc66147f8227b8103f0fa1f3ae8c7d4d65de4e9458d3b0\"" Oct 8 19:58:44.234997 kubelet[2466]: E1008 19:58:44.233601 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:58:44.353379 kubelet[2466]: E1008 19:58:44.353159 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:58:44.448793 kubelet[2466]: E1008 19:58:44.448688 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:58:44.448793 kubelet[2466]: W1008 19:58:44.448714 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:58:44.448793 kubelet[2466]: E1008 19:58:44.448734 2466 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:58:44.449030 kubelet[2466]: E1008 19:58:44.448969 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:58:44.449030 kubelet[2466]: W1008 19:58:44.448978 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:58:44.449030 kubelet[2466]: E1008 19:58:44.448988 2466 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:58:44.449244 kubelet[2466]: E1008 19:58:44.449170 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:58:44.449244 kubelet[2466]: W1008 19:58:44.449185 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:58:44.449244 kubelet[2466]: E1008 19:58:44.449194 2466 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:58:44.449575 kubelet[2466]: E1008 19:58:44.449376 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:58:44.449575 kubelet[2466]: W1008 19:58:44.449385 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:58:44.449575 kubelet[2466]: E1008 19:58:44.449393 2466 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:58:44.449645 kubelet[2466]: E1008 19:58:44.449599 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:58:44.449645 kubelet[2466]: W1008 19:58:44.449607 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:58:44.449645 kubelet[2466]: E1008 19:58:44.449617 2466 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:58:45.096945 update_engine[1427]: I20241008 19:58:45.096851 1427 update_attempter.cc:509] Updating boot flags... Oct 8 19:58:45.157562 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2978) Oct 8 19:58:45.172904 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2976) Oct 8 19:58:45.220064 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2976) Oct 8 19:58:45.466195 containerd[1439]: time="2024-10-08T19:58:45.466073910Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:58:45.467773 containerd[1439]: time="2024-10-08T19:58:45.467599571Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.28.1: active requests=0, bytes read=27474479" Oct 8 19:58:45.468933 containerd[1439]: time="2024-10-08T19:58:45.468892962Z" level=info msg="ImageCreate event name:\"sha256:c1d0081df1580fc17ebf95ca7499d2e1af1b1ab8c75835172213221419018924\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:58:45.472291 containerd[1439]: time="2024-10-08T19:58:45.472233451Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d97114d8e1e5186f1180fc8ef5f1309e0a8bf97efce35e0a0223d057d78d95fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:58:45.477512 containerd[1439]: time="2024-10-08T19:58:45.476287876Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.28.1\" with image id \"sha256:c1d0081df1580fc17ebf95ca7499d2e1af1b1ab8c75835172213221419018924\", repo tag \"ghcr.io/flatcar/calico/typha:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d97114d8e1e5186f1180fc8ef5f1309e0a8bf97efce35e0a0223d057d78d95fb\", size \"28841990\" in 1.572325671s" Oct 8 19:58:45.477512 containerd[1439]: time="2024-10-08T19:58:45.476334010Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.1\" returns image reference \"sha256:c1d0081df1580fc17ebf95ca7499d2e1af1b1ab8c75835172213221419018924\"" Oct 8 19:58:45.480489 containerd[1439]: time="2024-10-08T19:58:45.480430968Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\"" Oct 8 19:58:45.507772 containerd[1439]: time="2024-10-08T19:58:45.507731817Z" level=info msg="CreateContainer within sandbox \"cbfc257fb1ac294ee93f9ff22513e4f4cf20391b0f2ea5bb418c68de6b8d138f\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Oct 8 19:58:45.538782 containerd[1439]: time="2024-10-08T19:58:45.538724901Z" level=info msg="CreateContainer within sandbox \"cbfc257fb1ac294ee93f9ff22513e4f4cf20391b0f2ea5bb418c68de6b8d138f\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"caf63b4a8a7c083c28db4337d61aa000412710b6b78564291f357e9e92be73d7\"" Oct 8 19:58:45.539352 containerd[1439]: time="2024-10-08T19:58:45.539328364Z" level=info msg="StartContainer for \"caf63b4a8a7c083c28db4337d61aa000412710b6b78564291f357e9e92be73d7\"" Oct 8 19:58:45.573059 systemd[1]: Started cri-containerd-caf63b4a8a7c083c28db4337d61aa000412710b6b78564291f357e9e92be73d7.scope - libcontainer container caf63b4a8a7c083c28db4337d61aa000412710b6b78564291f357e9e92be73d7. Oct 8 19:58:45.619715 containerd[1439]: time="2024-10-08T19:58:45.619209900Z" level=info msg="StartContainer for \"caf63b4a8a7c083c28db4337d61aa000412710b6b78564291f357e9e92be73d7\" returns successfully" Oct 8 19:58:46.083541 kubelet[2466]: E1008 19:58:46.083144 2466 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ggmcg" podUID="199cac1a-3d1f-4713-aec1-c124cb5e48d4" Oct 8 19:58:46.142010 kubelet[2466]: E1008 19:58:46.141973 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:58:46.164665 kubelet[2466]: E1008 19:58:46.164628 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:58:46.164665 kubelet[2466]: W1008 19:58:46.164654 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:58:46.164665 kubelet[2466]: E1008 19:58:46.164673 2466 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:58:46.164941 kubelet[2466]: E1008 19:58:46.164928 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:58:46.165012 kubelet[2466]: W1008 19:58:46.164999 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:58:46.165053 kubelet[2466]: E1008 19:58:46.165015 2466 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:58:46.165285 kubelet[2466]: E1008 19:58:46.165270 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:58:46.165285 kubelet[2466]: W1008 19:58:46.165285 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:58:46.165606 kubelet[2466]: E1008 19:58:46.165297 2466 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:58:46.165606 kubelet[2466]: E1008 19:58:46.165592 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:58:46.165606 kubelet[2466]: W1008 19:58:46.165603 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:58:46.165699 kubelet[2466]: E1008 19:58:46.165615 2466 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:58:46.166169 kubelet[2466]: E1008 19:58:46.165854 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:58:46.166169 kubelet[2466]: W1008 19:58:46.165895 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:58:46.166169 kubelet[2466]: E1008 19:58:46.165911 2466 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:58:46.166169 kubelet[2466]: E1008 19:58:46.166087 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:58:46.166169 kubelet[2466]: W1008 19:58:46.166096 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:58:46.166169 kubelet[2466]: E1008 19:58:46.166104 2466 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:58:46.166484 kubelet[2466]: E1008 19:58:46.166467 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:58:46.166484 kubelet[2466]: W1008 19:58:46.166482 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:58:46.166568 kubelet[2466]: E1008 19:58:46.166493 2466 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:58:46.166721 kubelet[2466]: E1008 19:58:46.166708 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:58:46.166721 kubelet[2466]: W1008 19:58:46.166720 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:58:46.166787 kubelet[2466]: E1008 19:58:46.166730 2466 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:58:46.173741 kubelet[2466]: E1008 19:58:46.173719 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:58:46.173741 kubelet[2466]: W1008 19:58:46.173736 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:58:46.173855 kubelet[2466]: E1008 19:58:46.173748 2466 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:58:46.174019 kubelet[2466]: E1008 19:58:46.173952 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:58:46.174019 kubelet[2466]: W1008 19:58:46.173979 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:58:46.174019 kubelet[2466]: E1008 19:58:46.173988 2466 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:58:46.174167 kubelet[2466]: E1008 19:58:46.174155 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:58:46.174167 kubelet[2466]: W1008 19:58:46.174165 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:58:46.174219 kubelet[2466]: E1008 19:58:46.174173 2466 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:58:46.174350 kubelet[2466]: E1008 19:58:46.174328 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:58:46.174350 kubelet[2466]: W1008 19:58:46.174339 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:58:46.174350 kubelet[2466]: E1008 19:58:46.174347 2466 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:58:46.174498 kubelet[2466]: E1008 19:58:46.174487 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:58:46.174521 kubelet[2466]: W1008 19:58:46.174498 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:58:46.174521 kubelet[2466]: E1008 19:58:46.174506 2466 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:58:46.174646 kubelet[2466]: E1008 19:58:46.174637 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:58:46.174671 kubelet[2466]: W1008 19:58:46.174646 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:58:46.174671 kubelet[2466]: E1008 19:58:46.174653 2466 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:58:46.174951 kubelet[2466]: E1008 19:58:46.174934 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:58:46.174995 kubelet[2466]: W1008 19:58:46.174952 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:58:46.174995 kubelet[2466]: E1008 19:58:46.174963 2466 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:58:46.211165 kubelet[2466]: I1008 19:58:46.211075 2466 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7b454f87dd-vzqhg" podStartSLOduration=1.634323295 podStartE2EDuration="3.211055678s" podCreationTimestamp="2024-10-08 19:58:43 +0000 UTC" firstStartedPulling="2024-10-08 19:58:43.903448429 +0000 UTC m=+12.919251379" lastFinishedPulling="2024-10-08 19:58:45.480180812 +0000 UTC m=+14.495983762" observedRunningTime="2024-10-08 19:58:46.210807728 +0000 UTC m=+15.226610678" watchObservedRunningTime="2024-10-08 19:58:46.211055678 +0000 UTC m=+15.226858628" Oct 8 19:58:46.265125 kubelet[2466]: E1008 19:58:46.265093 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:58:46.265125 kubelet[2466]: W1008 19:58:46.265121 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:58:46.265288 kubelet[2466]: E1008 19:58:46.265139 2466 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:58:46.265514 kubelet[2466]: E1008 19:58:46.265498 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:58:46.265559 kubelet[2466]: W1008 19:58:46.265515 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:58:46.265559 kubelet[2466]: E1008 19:58:46.265533 2466 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:58:46.266252 kubelet[2466]: E1008 19:58:46.266230 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:58:46.266252 kubelet[2466]: W1008 19:58:46.266251 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:58:46.266326 kubelet[2466]: E1008 19:58:46.266273 2466 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:58:46.266531 kubelet[2466]: E1008 19:58:46.266516 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:58:46.266565 kubelet[2466]: W1008 19:58:46.266532 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:58:46.266565 kubelet[2466]: E1008 19:58:46.266548 2466 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:58:46.266807 kubelet[2466]: E1008 19:58:46.266786 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:58:46.266807 kubelet[2466]: W1008 19:58:46.266800 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:58:46.266944 kubelet[2466]: E1008 19:58:46.266922 2466 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:58:46.267029 kubelet[2466]: E1008 19:58:46.267015 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:58:46.267029 kubelet[2466]: W1008 19:58:46.267027 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:58:46.267086 kubelet[2466]: E1008 19:58:46.267053 2466 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:58:46.267292 kubelet[2466]: E1008 19:58:46.267265 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:58:46.267292 kubelet[2466]: W1008 19:58:46.267285 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:58:46.267629 kubelet[2466]: E1008 19:58:46.267350 2466 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:58:46.267629 kubelet[2466]: E1008 19:58:46.267440 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:58:46.267629 kubelet[2466]: W1008 19:58:46.267448 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:58:46.267629 kubelet[2466]: E1008 19:58:46.267463 2466 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:58:46.267782 kubelet[2466]: E1008 19:58:46.267676 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:58:46.267782 kubelet[2466]: W1008 19:58:46.267687 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:58:46.267782 kubelet[2466]: E1008 19:58:46.267703 2466 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:58:46.267915 kubelet[2466]: E1008 19:58:46.267901 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:58:46.267915 kubelet[2466]: W1008 19:58:46.267914 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:58:46.267915 kubelet[2466]: E1008 19:58:46.267929 2466 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:58:46.268221 kubelet[2466]: E1008 19:58:46.268074 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:58:46.268221 kubelet[2466]: W1008 19:58:46.268082 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:58:46.268221 kubelet[2466]: E1008 19:58:46.268095 2466 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:58:46.268406 kubelet[2466]: E1008 19:58:46.268385 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:58:46.268440 kubelet[2466]: W1008 19:58:46.268401 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:58:46.268440 kubelet[2466]: E1008 19:58:46.268435 2466 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:58:46.268966 kubelet[2466]: E1008 19:58:46.268934 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:58:46.268966 kubelet[2466]: W1008 19:58:46.268950 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:58:46.269065 kubelet[2466]: E1008 19:58:46.268980 2466 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:58:46.271905 kubelet[2466]: E1008 19:58:46.269121 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:58:46.271905 kubelet[2466]: W1008 19:58:46.269134 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:58:46.271905 kubelet[2466]: E1008 19:58:46.269154 2466 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:58:46.271905 kubelet[2466]: E1008 19:58:46.269296 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:58:46.271905 kubelet[2466]: W1008 19:58:46.269304 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:58:46.271905 kubelet[2466]: E1008 19:58:46.269320 2466 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:58:46.271905 kubelet[2466]: E1008 19:58:46.269502 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:58:46.271905 kubelet[2466]: W1008 19:58:46.269516 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:58:46.271905 kubelet[2466]: E1008 19:58:46.269524 2466 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:58:46.271905 kubelet[2466]: E1008 19:58:46.269676 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:58:46.272216 kubelet[2466]: W1008 19:58:46.269690 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:58:46.272216 kubelet[2466]: E1008 19:58:46.269700 2466 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:58:46.272216 kubelet[2466]: E1008 19:58:46.270161 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:58:46.272216 kubelet[2466]: W1008 19:58:46.270173 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:58:46.272216 kubelet[2466]: E1008 19:58:46.270183 2466 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:58:46.487227 containerd[1439]: time="2024-10-08T19:58:46.487106876Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:58:46.489942 containerd[1439]: time="2024-10-08T19:58:46.489897546Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1: active requests=0, bytes read=4916957" Oct 8 19:58:46.491394 containerd[1439]: time="2024-10-08T19:58:46.491335474Z" level=info msg="ImageCreate event name:\"sha256:20b54f73684933653d4a4b8b63c59211e3c828f94251ecf4d1bff2a334ff4ba0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:58:46.494986 containerd[1439]: time="2024-10-08T19:58:46.494941495Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:7938ad0cb2b49a32937962cc40dd826ad5858999c603bdf5fbf2092a4d50cf01\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:58:46.496244 containerd[1439]: time="2024-10-08T19:58:46.496188208Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" with image id \"sha256:20b54f73684933653d4a4b8b63c59211e3c828f94251ecf4d1bff2a334ff4ba0\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:7938ad0cb2b49a32937962cc40dd826ad5858999c603bdf5fbf2092a4d50cf01\", size \"6284436\" in 1.01572139s" Oct 8 19:58:46.496244 containerd[1439]: time="2024-10-08T19:58:46.496230020Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" returns image reference \"sha256:20b54f73684933653d4a4b8b63c59211e3c828f94251ecf4d1bff2a334ff4ba0\"" Oct 8 19:58:46.498583 containerd[1439]: time="2024-10-08T19:58:46.498543675Z" level=info msg="CreateContainer within sandbox \"a8c0d87cd1b962f5adbc66147f8227b8103f0fa1f3ae8c7d4d65de4e9458d3b0\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Oct 8 19:58:46.516137 containerd[1439]: time="2024-10-08T19:58:46.516075882Z" level=info msg="CreateContainer within sandbox \"a8c0d87cd1b962f5adbc66147f8227b8103f0fa1f3ae8c7d4d65de4e9458d3b0\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"74fe88d1b1f240261cc75d8cf38f74e64427b3a015bc4d5ff2106cbc5aba3717\"" Oct 8 19:58:46.517028 containerd[1439]: time="2024-10-08T19:58:46.516938806Z" level=info msg="StartContainer for \"74fe88d1b1f240261cc75d8cf38f74e64427b3a015bc4d5ff2106cbc5aba3717\"" Oct 8 19:58:46.545043 systemd[1]: Started cri-containerd-74fe88d1b1f240261cc75d8cf38f74e64427b3a015bc4d5ff2106cbc5aba3717.scope - libcontainer container 74fe88d1b1f240261cc75d8cf38f74e64427b3a015bc4d5ff2106cbc5aba3717. 
Oct 8 19:58:46.577649 containerd[1439]: time="2024-10-08T19:58:46.576781198Z" level=info msg="StartContainer for \"74fe88d1b1f240261cc75d8cf38f74e64427b3a015bc4d5ff2106cbc5aba3717\" returns successfully" Oct 8 19:58:46.605969 systemd[1]: cri-containerd-74fe88d1b1f240261cc75d8cf38f74e64427b3a015bc4d5ff2106cbc5aba3717.scope: Deactivated successfully. Oct 8 19:58:46.656412 containerd[1439]: time="2024-10-08T19:58:46.649396888Z" level=info msg="shim disconnected" id=74fe88d1b1f240261cc75d8cf38f74e64427b3a015bc4d5ff2106cbc5aba3717 namespace=k8s.io Oct 8 19:58:46.656412 containerd[1439]: time="2024-10-08T19:58:46.656416596Z" level=warning msg="cleaning up after shim disconnected" id=74fe88d1b1f240261cc75d8cf38f74e64427b3a015bc4d5ff2106cbc5aba3717 namespace=k8s.io Oct 8 19:58:46.656625 containerd[1439]: time="2024-10-08T19:58:46.656432601Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 8 19:58:46.771168 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-74fe88d1b1f240261cc75d8cf38f74e64427b3a015bc4d5ff2106cbc5aba3717-rootfs.mount: Deactivated successfully. Oct 8 19:58:47.145387 kubelet[2466]: I1008 19:58:47.144980 2466 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 8 19:58:47.145387 kubelet[2466]: E1008 19:58:47.145181 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:58:47.145387 kubelet[2466]: E1008 19:58:47.145269 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:58:47.146825 containerd[1439]: time="2024-10-08T19:58:47.146586240Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.1\"" Oct 8 19:58:48.081470 kubelet[2466]: E1008 19:58:48.081402 2466 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ggmcg" podUID="199cac1a-3d1f-4713-aec1-c124cb5e48d4" Oct 8 19:58:50.080950 kubelet[2466]: E1008 19:58:50.080807 2466 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ggmcg" podUID="199cac1a-3d1f-4713-aec1-c124cb5e48d4" Oct 8 19:58:50.143641 containerd[1439]: time="2024-10-08T19:58:50.143588903Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:58:50.144649 containerd[1439]: time="2024-10-08T19:58:50.144604765Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.28.1: active requests=0, bytes read=86859887" Oct 8 19:58:50.145439 containerd[1439]: time="2024-10-08T19:58:50.145405541Z" level=info msg="ImageCreate event name:\"sha256:6123e515001d9cafdf3dbe8f8dc8b5ae1c56165013052b8cbc7d27f3395cfd85\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:58:50.147468 containerd[1439]: time="2024-10-08T19:58:50.147427143Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:1cf32b2159ec9f938e747b82b9b7c74e26e17eb220e002a6a1bd6b5b1266e1fa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 
19:58:50.148123 containerd[1439]: time="2024-10-08T19:58:50.148088488Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.28.1\" with image id \"sha256:6123e515001d9cafdf3dbe8f8dc8b5ae1c56165013052b8cbc7d27f3395cfd85\", repo tag \"ghcr.io/flatcar/calico/cni:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:1cf32b2159ec9f938e747b82b9b7c74e26e17eb220e002a6a1bd6b5b1266e1fa\", size \"88227406\" in 3.001458556s" Oct 8 19:58:50.148166 containerd[1439]: time="2024-10-08T19:58:50.148126056Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.1\" returns image reference \"sha256:6123e515001d9cafdf3dbe8f8dc8b5ae1c56165013052b8cbc7d27f3395cfd85\"" Oct 8 19:58:50.151531 containerd[1439]: time="2024-10-08T19:58:50.151493233Z" level=info msg="CreateContainer within sandbox \"a8c0d87cd1b962f5adbc66147f8227b8103f0fa1f3ae8c7d4d65de4e9458d3b0\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Oct 8 19:58:50.165875 containerd[1439]: time="2024-10-08T19:58:50.165824049Z" level=info msg="CreateContainer within sandbox \"a8c0d87cd1b962f5adbc66147f8227b8103f0fa1f3ae8c7d4d65de4e9458d3b0\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"9fc6659d490871fb6866f751ef4d0fb57a1af7c882e8b374506074ed268a8d0f\"" Oct 8 19:58:50.166439 containerd[1439]: time="2024-10-08T19:58:50.166411257Z" level=info msg="StartContainer for \"9fc6659d490871fb6866f751ef4d0fb57a1af7c882e8b374506074ed268a8d0f\"" Oct 8 19:58:50.189056 systemd[1]: run-containerd-runc-k8s.io-9fc6659d490871fb6866f751ef4d0fb57a1af7c882e8b374506074ed268a8d0f-runc.bWtYMP.mount: Deactivated successfully. Oct 8 19:58:50.206090 systemd[1]: Started cri-containerd-9fc6659d490871fb6866f751ef4d0fb57a1af7c882e8b374506074ed268a8d0f.scope - libcontainer container 9fc6659d490871fb6866f751ef4d0fb57a1af7c882e8b374506074ed268a8d0f. Oct 8 19:58:50.244718 containerd[1439]: time="2024-10-08T19:58:50.244471220Z" level=info msg="StartContainer for \"9fc6659d490871fb6866f751ef4d0fb57a1af7c882e8b374506074ed268a8d0f\" returns successfully" Oct 8 19:58:50.819296 systemd[1]: cri-containerd-9fc6659d490871fb6866f751ef4d0fb57a1af7c882e8b374506074ed268a8d0f.scope: Deactivated successfully. Oct 8 19:58:50.872739 containerd[1439]: time="2024-10-08T19:58:50.872676733Z" level=info msg="shim disconnected" id=9fc6659d490871fb6866f751ef4d0fb57a1af7c882e8b374506074ed268a8d0f namespace=k8s.io Oct 8 19:58:50.872739 containerd[1439]: time="2024-10-08T19:58:50.872733425Z" level=warning msg="cleaning up after shim disconnected" id=9fc6659d490871fb6866f751ef4d0fb57a1af7c882e8b374506074ed268a8d0f namespace=k8s.io Oct 8 19:58:50.872739 containerd[1439]: time="2024-10-08T19:58:50.872742627Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 8 19:58:50.892036 kubelet[2466]: I1008 19:58:50.891137 2466 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Oct 8 19:58:50.927285 systemd[1]: Created slice kubepods-burstable-pod4a8304d9_5b66_4740_817a_39422665117d.slice - libcontainer container kubepods-burstable-pod4a8304d9_5b66_4740_817a_39422665117d.slice. Oct 8 19:58:50.935084 systemd[1]: Created slice kubepods-burstable-pod73b8d60e_382d_4e38_addd_253451caecf4.slice - libcontainer container kubepods-burstable-pod73b8d60e_382d_4e38_addd_253451caecf4.slice. Oct 8 19:58:50.940747 systemd[1]: Created slice kubepods-besteffort-podccb1d4f9_9ed3_446a_bf6d_201aa89f5d74.slice - libcontainer container kubepods-besteffort-podccb1d4f9_9ed3_446a_bf6d_201aa89f5d74.slice. 
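[The pull/create/start/deactivate cycle above (pod2daemon-flexvol, then cni) is the normal lifecycle of calico-node's init containers: each image is pulled through containerd's k8s.io namespace, the container runs once to completion, and its scope and shim are torn down on exit — so the "shim disconnected" and "Deactivated successfully" lines are cleanup, not failures. A rough sketch of the pull step using the containerd 1.x Go client, with the socket path and image ref taken from the log and the wiring assumed:]

```go
package main

import (
	"context"
	"fmt"
	"time"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	// CRI talks to containerd over its local socket.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		panic(err)
	}
	defer client.Close()

	// Kubernetes-managed images live in the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	start := time.Now()
	img, err := client.Pull(ctx, "ghcr.io/flatcar/calico/cni:v3.28.1", containerd.WithPullUnpack)
	if err != nil {
		panic(err)
	}
	// Mirrors the `Pulled image ... in 3.001458556s` accounting above.
	fmt.Printf("pulled %s in %s\n", img.Name(), time.Since(start))
}
```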
Oct 8 19:58:51.099877 kubelet[2466]: I1008 19:58:51.099817 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qc6l6\" (UniqueName: \"kubernetes.io/projected/4a8304d9-5b66-4740-817a-39422665117d-kube-api-access-qc6l6\") pod \"coredns-6f6b679f8f-lccm5\" (UID: \"4a8304d9-5b66-4740-817a-39422665117d\") " pod="kube-system/coredns-6f6b679f8f-lccm5" Oct 8 19:58:51.100230 kubelet[2466]: I1008 19:58:51.099882 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4a8304d9-5b66-4740-817a-39422665117d-config-volume\") pod \"coredns-6f6b679f8f-lccm5\" (UID: \"4a8304d9-5b66-4740-817a-39422665117d\") " pod="kube-system/coredns-6f6b679f8f-lccm5" Oct 8 19:58:51.100230 kubelet[2466]: I1008 19:58:51.099915 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mbgz4\" (UniqueName: \"kubernetes.io/projected/ccb1d4f9-9ed3-446a-bf6d-201aa89f5d74-kube-api-access-mbgz4\") pod \"calico-kube-controllers-6d6d468b64-wq8lg\" (UID: \"ccb1d4f9-9ed3-446a-bf6d-201aa89f5d74\") " pod="calico-system/calico-kube-controllers-6d6d468b64-wq8lg" Oct 8 19:58:51.100230 kubelet[2466]: I1008 19:58:51.099941 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rn9nl\" (UniqueName: \"kubernetes.io/projected/73b8d60e-382d-4e38-addd-253451caecf4-kube-api-access-rn9nl\") pod \"coredns-6f6b679f8f-s4w46\" (UID: \"73b8d60e-382d-4e38-addd-253451caecf4\") " pod="kube-system/coredns-6f6b679f8f-s4w46" Oct 8 19:58:51.100230 kubelet[2466]: I1008 19:58:51.099958 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ccb1d4f9-9ed3-446a-bf6d-201aa89f5d74-tigera-ca-bundle\") pod \"calico-kube-controllers-6d6d468b64-wq8lg\" (UID: \"ccb1d4f9-9ed3-446a-bf6d-201aa89f5d74\") " pod="calico-system/calico-kube-controllers-6d6d468b64-wq8lg" Oct 8 19:58:51.100230 kubelet[2466]: I1008 19:58:51.099975 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/73b8d60e-382d-4e38-addd-253451caecf4-config-volume\") pod \"coredns-6f6b679f8f-s4w46\" (UID: \"73b8d60e-382d-4e38-addd-253451caecf4\") " pod="kube-system/coredns-6f6b679f8f-s4w46" Oct 8 19:58:51.153529 kubelet[2466]: E1008 19:58:51.153492 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:58:51.155072 containerd[1439]: time="2024-10-08T19:58:51.155031420Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.1\"" Oct 8 19:58:51.160461 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9fc6659d490871fb6866f751ef4d0fb57a1af7c882e8b374506074ed268a8d0f-rootfs.mount: Deactivated successfully. 
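[The reconciler_common lines above are kubelet's volume manager verifying each declared volume before mounting it into the pod. For the CoreDNS pods that means two volumes: the Corefile ConfigMap and a projected service-account token (the kube-api-access-* names). A sketch of the equivalent definitions using the client-go API types — volume names are taken from the log; the token lifetime and projection sources are assumptions (the real kube-api-access volume also projects ca.crt and the namespace):]

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	expiry := int64(3607) // kubelet's usual requested token lifetime; value assumed

	vols := []corev1.Volume{
		{
			// CoreDNS config, mounted from the "coredns" ConfigMap.
			Name: "config-volume",
			VolumeSource: corev1.VolumeSource{
				ConfigMap: &corev1.ConfigMapVolumeSource{
					LocalObjectReference: corev1.LocalObjectReference{Name: "coredns"},
				},
			},
		},
		{
			// Projected service-account token behind kube-api-access-qc6l6.
			Name: "kube-api-access-qc6l6",
			VolumeSource: corev1.VolumeSource{
				Projected: &corev1.ProjectedVolumeSource{
					Sources: []corev1.VolumeProjection{{
						ServiceAccountToken: &corev1.ServiceAccountTokenProjection{
							Path:              "token",
							ExpirationSeconds: &expiry,
						},
					}},
				},
			},
		},
	}
	for _, v := range vols {
		fmt.Println("verify/attach volume:", v.Name)
	}
}
```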
Oct 8 19:58:51.233375 kubelet[2466]: E1008 19:58:51.233331 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:58:51.234045 containerd[1439]: time="2024-10-08T19:58:51.234008223Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-lccm5,Uid:4a8304d9-5b66-4740-817a-39422665117d,Namespace:kube-system,Attempt:0,}" Oct 8 19:58:51.239232 kubelet[2466]: E1008 19:58:51.239203 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:58:51.240577 containerd[1439]: time="2024-10-08T19:58:51.240546804Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-s4w46,Uid:73b8d60e-382d-4e38-addd-253451caecf4,Namespace:kube-system,Attempt:0,}" Oct 8 19:58:51.245802 containerd[1439]: time="2024-10-08T19:58:51.245553592Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6d6d468b64-wq8lg,Uid:ccb1d4f9-9ed3-446a-bf6d-201aa89f5d74,Namespace:calico-system,Attempt:0,}" Oct 8 19:58:51.580719 kubelet[2466]: I1008 19:58:51.580690 2466 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 8 19:58:51.591038 kubelet[2466]: E1008 19:58:51.589969 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:58:51.642805 containerd[1439]: time="2024-10-08T19:58:51.642691829Z" level=error msg="Failed to destroy network for sandbox \"40250e5d4746e3055d15aceffd5535ca755781752431f94d61786c890f58f374\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:58:51.644848 containerd[1439]: time="2024-10-08T19:58:51.644805902Z" level=error msg="encountered an error cleaning up failed sandbox \"40250e5d4746e3055d15aceffd5535ca755781752431f94d61786c890f58f374\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:58:51.644976 containerd[1439]: time="2024-10-08T19:58:51.644908483Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-s4w46,Uid:73b8d60e-382d-4e38-addd-253451caecf4,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"40250e5d4746e3055d15aceffd5535ca755781752431f94d61786c890f58f374\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:58:51.649783 kubelet[2466]: E1008 19:58:51.649733 2466 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"40250e5d4746e3055d15aceffd5535ca755781752431f94d61786c890f58f374\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:58:51.649906 kubelet[2466]: E1008 19:58:51.649806 2466 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc 
= failed to setup network for sandbox \"40250e5d4746e3055d15aceffd5535ca755781752431f94d61786c890f58f374\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-s4w46" Oct 8 19:58:51.649906 kubelet[2466]: E1008 19:58:51.649855 2466 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"40250e5d4746e3055d15aceffd5535ca755781752431f94d61786c890f58f374\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-s4w46" Oct 8 19:58:51.649975 kubelet[2466]: E1008 19:58:51.649906 2466 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-s4w46_kube-system(73b8d60e-382d-4e38-addd-253451caecf4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-s4w46_kube-system(73b8d60e-382d-4e38-addd-253451caecf4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"40250e5d4746e3055d15aceffd5535ca755781752431f94d61786c890f58f374\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-s4w46" podUID="73b8d60e-382d-4e38-addd-253451caecf4" Oct 8 19:58:51.656294 containerd[1439]: time="2024-10-08T19:58:51.656243689Z" level=error msg="Failed to destroy network for sandbox \"a9b1e279edefff23149aad4d2ba0edd2dea6c38a621a5cf2b33459b96f9721fe\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:58:51.657415 containerd[1439]: time="2024-10-08T19:58:51.657369200Z" level=error msg="encountered an error cleaning up failed sandbox \"a9b1e279edefff23149aad4d2ba0edd2dea6c38a621a5cf2b33459b96f9721fe\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:58:51.657514 containerd[1439]: time="2024-10-08T19:58:51.657440094Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-lccm5,Uid:4a8304d9-5b66-4740-817a-39422665117d,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a9b1e279edefff23149aad4d2ba0edd2dea6c38a621a5cf2b33459b96f9721fe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:58:51.657684 kubelet[2466]: E1008 19:58:51.657646 2466 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a9b1e279edefff23149aad4d2ba0edd2dea6c38a621a5cf2b33459b96f9721fe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:58:51.657729 kubelet[2466]: E1008 19:58:51.657706 2466 kuberuntime_sandbox.go:72] "Failed to create sandbox for 
pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a9b1e279edefff23149aad4d2ba0edd2dea6c38a621a5cf2b33459b96f9721fe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-lccm5" Oct 8 19:58:51.657769 kubelet[2466]: E1008 19:58:51.657726 2466 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a9b1e279edefff23149aad4d2ba0edd2dea6c38a621a5cf2b33459b96f9721fe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-lccm5" Oct 8 19:58:51.657806 kubelet[2466]: E1008 19:58:51.657767 2466 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-lccm5_kube-system(4a8304d9-5b66-4740-817a-39422665117d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-lccm5_kube-system(4a8304d9-5b66-4740-817a-39422665117d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a9b1e279edefff23149aad4d2ba0edd2dea6c38a621a5cf2b33459b96f9721fe\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-lccm5" podUID="4a8304d9-5b66-4740-817a-39422665117d" Oct 8 19:58:51.661882 containerd[1439]: time="2024-10-08T19:58:51.661156177Z" level=error msg="Failed to destroy network for sandbox \"250f2203437d2304a2af85ef820da9a0b000057fac254b692a42d8c0b3dbddbd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:58:51.664275 containerd[1439]: time="2024-10-08T19:58:51.664236569Z" level=error msg="encountered an error cleaning up failed sandbox \"250f2203437d2304a2af85ef820da9a0b000057fac254b692a42d8c0b3dbddbd\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:58:51.664352 containerd[1439]: time="2024-10-08T19:58:51.664293500Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6d6d468b64-wq8lg,Uid:ccb1d4f9-9ed3-446a-bf6d-201aa89f5d74,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"250f2203437d2304a2af85ef820da9a0b000057fac254b692a42d8c0b3dbddbd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:58:51.664588 kubelet[2466]: E1008 19:58:51.664550 2466 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"250f2203437d2304a2af85ef820da9a0b000057fac254b692a42d8c0b3dbddbd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:58:51.664647 kubelet[2466]: E1008 19:58:51.664600 2466 
kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"250f2203437d2304a2af85ef820da9a0b000057fac254b692a42d8c0b3dbddbd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6d6d468b64-wq8lg" Oct 8 19:58:51.664647 kubelet[2466]: E1008 19:58:51.664620 2466 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"250f2203437d2304a2af85ef820da9a0b000057fac254b692a42d8c0b3dbddbd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6d6d468b64-wq8lg" Oct 8 19:58:51.664699 kubelet[2466]: E1008 19:58:51.664662 2466 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6d6d468b64-wq8lg_calico-system(ccb1d4f9-9ed3-446a-bf6d-201aa89f5d74)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6d6d468b64-wq8lg_calico-system(ccb1d4f9-9ed3-446a-bf6d-201aa89f5d74)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"250f2203437d2304a2af85ef820da9a0b000057fac254b692a42d8c0b3dbddbd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6d6d468b64-wq8lg" podUID="ccb1d4f9-9ed3-446a-bf6d-201aa89f5d74" Oct 8 19:58:52.087238 systemd[1]: Created slice kubepods-besteffort-pod199cac1a_3d1f_4713_aec1_c124cb5e48d4.slice - libcontainer container kubepods-besteffort-pod199cac1a_3d1f_4713_aec1_c124cb5e48d4.slice. 
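[All three sandbox failures above share one root cause: the Calico CNI plugin needs /var/lib/calico/nodename, and that file is only written once the calico/node container starts — which is still blocked on the node image pull in progress above. Until then, every CNI add and delete fails with the same stat error, and the pods stay pending. A hedged sketch of the check the plugin performs, with the error wording approximated from the log:]

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// calico/node writes its detected node name here on startup; the
// CNI binary reads it to know which node's IPAM state to use.
const nodenameFile = "/var/lib/calico/nodename"

func determineNodename() (string, error) {
	data, err := os.ReadFile(nodenameFile)
	if err != nil {
		// Surfaces as: plugin type="calico" failed (add): stat
		// /var/lib/calico/nodename: no such file or directory ...
		return "", fmt.Errorf("stat %s: %w: check that the calico/node container is running and has mounted /var/lib/calico/", nodenameFile, err)
	}
	return strings.TrimSpace(string(data)), nil
}

func main() {
	name, err := determineNodename()
	if err != nil {
		fmt.Println(err) // what bubbles up into the RunPodSandbox errors above
		return
	}
	fmt.Println("node:", name)
}
```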
Oct 8 19:58:52.095797 containerd[1439]: time="2024-10-08T19:58:52.095746534Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ggmcg,Uid:199cac1a-3d1f-4713-aec1-c124cb5e48d4,Namespace:calico-system,Attempt:0,}" Oct 8 19:58:52.160633 kubelet[2466]: I1008 19:58:52.160587 2466 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a9b1e279edefff23149aad4d2ba0edd2dea6c38a621a5cf2b33459b96f9721fe" Oct 8 19:58:52.162213 containerd[1439]: time="2024-10-08T19:58:52.162164309Z" level=info msg="StopPodSandbox for \"a9b1e279edefff23149aad4d2ba0edd2dea6c38a621a5cf2b33459b96f9721fe\"" Oct 8 19:58:52.167125 containerd[1439]: time="2024-10-08T19:58:52.162397433Z" level=info msg="Ensure that sandbox a9b1e279edefff23149aad4d2ba0edd2dea6c38a621a5cf2b33459b96f9721fe in task-service has been cleanup successfully" Oct 8 19:58:52.167125 containerd[1439]: time="2024-10-08T19:58:52.164846705Z" level=info msg="StopPodSandbox for \"250f2203437d2304a2af85ef820da9a0b000057fac254b692a42d8c0b3dbddbd\"" Oct 8 19:58:52.167125 containerd[1439]: time="2024-10-08T19:58:52.165046143Z" level=info msg="Ensure that sandbox 250f2203437d2304a2af85ef820da9a0b000057fac254b692a42d8c0b3dbddbd in task-service has been cleanup successfully" Oct 8 19:58:52.167207 kubelet[2466]: I1008 19:58:52.164130 2466 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="250f2203437d2304a2af85ef820da9a0b000057fac254b692a42d8c0b3dbddbd" Oct 8 19:58:52.168132 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-40250e5d4746e3055d15aceffd5535ca755781752431f94d61786c890f58f374-shm.mount: Deactivated successfully. Oct 8 19:58:52.169788 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-250f2203437d2304a2af85ef820da9a0b000057fac254b692a42d8c0b3dbddbd-shm.mount: Deactivated successfully. Oct 8 19:58:52.169858 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a9b1e279edefff23149aad4d2ba0edd2dea6c38a621a5cf2b33459b96f9721fe-shm.mount: Deactivated successfully. 
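[When a sandbox lands in SANDBOX_UNKNOWN as above, kubelet's pod_container_deletor issues StopPodSandbox to force a cleanup pass. The CRI contract requires StopPodSandbox to be idempotent, so kubelet can safely repeat it until the CNI delete finally succeeds. A minimal sketch of that CRI call over the containerd socket, with the gRPC wiring assumed and the sandbox ID copied from the log:]

```go
package main

import (
	"context"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	client := runtimeapi.NewRuntimeServiceClient(conn)
	// Idempotent by contract: safe to retry until teardown succeeds.
	_, err = client.StopPodSandbox(context.Background(), &runtimeapi.StopPodSandboxRequest{
		PodSandboxId: "a9b1e279edefff23149aad4d2ba0edd2dea6c38a621a5cf2b33459b96f9721fe",
	})
	if err != nil {
		log.Println("StopPodSandbox failed, kubelet will retry:", err)
	}
}
```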
Oct 8 19:58:52.171761 kubelet[2466]: E1008 19:58:52.170399 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:58:52.171761 kubelet[2466]: I1008 19:58:52.170481 2466 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="40250e5d4746e3055d15aceffd5535ca755781752431f94d61786c890f58f374" Oct 8 19:58:52.171934 containerd[1439]: time="2024-10-08T19:58:52.171471019Z" level=info msg="StopPodSandbox for \"40250e5d4746e3055d15aceffd5535ca755781752431f94d61786c890f58f374\"" Oct 8 19:58:52.171934 containerd[1439]: time="2024-10-08T19:58:52.171650293Z" level=info msg="Ensure that sandbox 40250e5d4746e3055d15aceffd5535ca755781752431f94d61786c890f58f374 in task-service has been cleanup successfully" Oct 8 19:58:52.226250 containerd[1439]: time="2024-10-08T19:58:52.226127971Z" level=error msg="StopPodSandbox for \"250f2203437d2304a2af85ef820da9a0b000057fac254b692a42d8c0b3dbddbd\" failed" error="failed to destroy network for sandbox \"250f2203437d2304a2af85ef820da9a0b000057fac254b692a42d8c0b3dbddbd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:58:52.226680 kubelet[2466]: E1008 19:58:52.226506 2466 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"250f2203437d2304a2af85ef820da9a0b000057fac254b692a42d8c0b3dbddbd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="250f2203437d2304a2af85ef820da9a0b000057fac254b692a42d8c0b3dbddbd" Oct 8 19:58:52.226680 kubelet[2466]: E1008 19:58:52.226568 2466 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"250f2203437d2304a2af85ef820da9a0b000057fac254b692a42d8c0b3dbddbd"} Oct 8 19:58:52.226680 kubelet[2466]: E1008 19:58:52.226629 2466 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ccb1d4f9-9ed3-446a-bf6d-201aa89f5d74\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"250f2203437d2304a2af85ef820da9a0b000057fac254b692a42d8c0b3dbddbd\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 8 19:58:52.226680 kubelet[2466]: E1008 19:58:52.226650 2466 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ccb1d4f9-9ed3-446a-bf6d-201aa89f5d74\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"250f2203437d2304a2af85ef820da9a0b000057fac254b692a42d8c0b3dbddbd\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6d6d468b64-wq8lg" podUID="ccb1d4f9-9ed3-446a-bf6d-201aa89f5d74" Oct 8 19:58:52.230193 containerd[1439]: time="2024-10-08T19:58:52.230150505Z" level=error msg="StopPodSandbox for \"a9b1e279edefff23149aad4d2ba0edd2dea6c38a621a5cf2b33459b96f9721fe\" failed" error="failed to destroy network for sandbox 
\"a9b1e279edefff23149aad4d2ba0edd2dea6c38a621a5cf2b33459b96f9721fe\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:58:52.230486 kubelet[2466]: E1008 19:58:52.230351 2466 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a9b1e279edefff23149aad4d2ba0edd2dea6c38a621a5cf2b33459b96f9721fe\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a9b1e279edefff23149aad4d2ba0edd2dea6c38a621a5cf2b33459b96f9721fe" Oct 8 19:58:52.230486 kubelet[2466]: E1008 19:58:52.230401 2466 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a9b1e279edefff23149aad4d2ba0edd2dea6c38a621a5cf2b33459b96f9721fe"} Oct 8 19:58:52.230486 kubelet[2466]: E1008 19:58:52.230430 2466 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4a8304d9-5b66-4740-817a-39422665117d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a9b1e279edefff23149aad4d2ba0edd2dea6c38a621a5cf2b33459b96f9721fe\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 8 19:58:52.230486 kubelet[2466]: E1008 19:58:52.230458 2466 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4a8304d9-5b66-4740-817a-39422665117d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a9b1e279edefff23149aad4d2ba0edd2dea6c38a621a5cf2b33459b96f9721fe\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-lccm5" podUID="4a8304d9-5b66-4740-817a-39422665117d" Oct 8 19:58:52.232067 containerd[1439]: time="2024-10-08T19:58:52.231998221Z" level=error msg="Failed to destroy network for sandbox \"13b6ddaadef47d29b0c2e8aee812e0d282261de0e439780c6a582e7bdceb27d2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:58:52.233860 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-13b6ddaadef47d29b0c2e8aee812e0d282261de0e439780c6a582e7bdceb27d2-shm.mount: Deactivated successfully. 
Oct 8 19:58:52.234959 containerd[1439]: time="2024-10-08T19:58:52.232432504Z" level=error msg="encountered an error cleaning up failed sandbox \"13b6ddaadef47d29b0c2e8aee812e0d282261de0e439780c6a582e7bdceb27d2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:58:52.237016 containerd[1439]: time="2024-10-08T19:58:52.236948173Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ggmcg,Uid:199cac1a-3d1f-4713-aec1-c124cb5e48d4,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"13b6ddaadef47d29b0c2e8aee812e0d282261de0e439780c6a582e7bdceb27d2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:58:52.240778 kubelet[2466]: E1008 19:58:52.237212 2466 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"13b6ddaadef47d29b0c2e8aee812e0d282261de0e439780c6a582e7bdceb27d2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:58:52.240862 kubelet[2466]: E1008 19:58:52.240809 2466 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"13b6ddaadef47d29b0c2e8aee812e0d282261de0e439780c6a582e7bdceb27d2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ggmcg" Oct 8 19:58:52.240862 kubelet[2466]: E1008 19:58:52.240831 2466 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"13b6ddaadef47d29b0c2e8aee812e0d282261de0e439780c6a582e7bdceb27d2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ggmcg" Oct 8 19:58:52.240927 kubelet[2466]: E1008 19:58:52.240892 2466 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-ggmcg_calico-system(199cac1a-3d1f-4713-aec1-c124cb5e48d4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-ggmcg_calico-system(199cac1a-3d1f-4713-aec1-c124cb5e48d4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"13b6ddaadef47d29b0c2e8aee812e0d282261de0e439780c6a582e7bdceb27d2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-ggmcg" podUID="199cac1a-3d1f-4713-aec1-c124cb5e48d4" Oct 8 19:58:52.242664 containerd[1439]: time="2024-10-08T19:58:52.242625505Z" level=error msg="StopPodSandbox for \"40250e5d4746e3055d15aceffd5535ca755781752431f94d61786c890f58f374\" failed" error="failed to destroy network for sandbox \"40250e5d4746e3055d15aceffd5535ca755781752431f94d61786c890f58f374\": plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:58:52.242963 kubelet[2466]: E1008 19:58:52.242823 2466 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"40250e5d4746e3055d15aceffd5535ca755781752431f94d61786c890f58f374\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="40250e5d4746e3055d15aceffd5535ca755781752431f94d61786c890f58f374" Oct 8 19:58:52.242963 kubelet[2466]: E1008 19:58:52.242862 2466 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"40250e5d4746e3055d15aceffd5535ca755781752431f94d61786c890f58f374"} Oct 8 19:58:52.242963 kubelet[2466]: E1008 19:58:52.242900 2466 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"73b8d60e-382d-4e38-addd-253451caecf4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"40250e5d4746e3055d15aceffd5535ca755781752431f94d61786c890f58f374\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 8 19:58:52.242963 kubelet[2466]: E1008 19:58:52.242919 2466 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"73b8d60e-382d-4e38-addd-253451caecf4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"40250e5d4746e3055d15aceffd5535ca755781752431f94d61786c890f58f374\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-s4w46" podUID="73b8d60e-382d-4e38-addd-253451caecf4" Oct 8 19:58:53.180834 kubelet[2466]: I1008 19:58:53.180790 2466 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="13b6ddaadef47d29b0c2e8aee812e0d282261de0e439780c6a582e7bdceb27d2" Oct 8 19:58:53.181654 containerd[1439]: time="2024-10-08T19:58:53.181528562Z" level=info msg="StopPodSandbox for \"13b6ddaadef47d29b0c2e8aee812e0d282261de0e439780c6a582e7bdceb27d2\"" Oct 8 19:58:53.181902 containerd[1439]: time="2024-10-08T19:58:53.181716516Z" level=info msg="Ensure that sandbox 13b6ddaadef47d29b0c2e8aee812e0d282261de0e439780c6a582e7bdceb27d2 in task-service has been cleanup successfully" Oct 8 19:58:53.214292 containerd[1439]: time="2024-10-08T19:58:53.214120039Z" level=error msg="StopPodSandbox for \"13b6ddaadef47d29b0c2e8aee812e0d282261de0e439780c6a582e7bdceb27d2\" failed" error="failed to destroy network for sandbox \"13b6ddaadef47d29b0c2e8aee812e0d282261de0e439780c6a582e7bdceb27d2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:58:53.214432 kubelet[2466]: E1008 19:58:53.214363 2466 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"13b6ddaadef47d29b0c2e8aee812e0d282261de0e439780c6a582e7bdceb27d2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="13b6ddaadef47d29b0c2e8aee812e0d282261de0e439780c6a582e7bdceb27d2" Oct 8 19:58:53.214432 kubelet[2466]: E1008 19:58:53.214421 2466 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"13b6ddaadef47d29b0c2e8aee812e0d282261de0e439780c6a582e7bdceb27d2"} Oct 8 19:58:53.214510 kubelet[2466]: E1008 19:58:53.214455 2466 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"199cac1a-3d1f-4713-aec1-c124cb5e48d4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"13b6ddaadef47d29b0c2e8aee812e0d282261de0e439780c6a582e7bdceb27d2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 8 19:58:53.214510 kubelet[2466]: E1008 19:58:53.214477 2466 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"199cac1a-3d1f-4713-aec1-c124cb5e48d4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"13b6ddaadef47d29b0c2e8aee812e0d282261de0e439780c6a582e7bdceb27d2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-ggmcg" podUID="199cac1a-3d1f-4713-aec1-c124cb5e48d4" Oct 8 19:58:53.963906 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2259211192.mount: Deactivated successfully. Oct 8 19:58:54.248694 containerd[1439]: time="2024-10-08T19:58:54.248568781Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:58:54.249087 containerd[1439]: time="2024-10-08T19:58:54.249055223Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.28.1: active requests=0, bytes read=113057300" Oct 8 19:58:54.250023 containerd[1439]: time="2024-10-08T19:58:54.249967937Z" level=info msg="ImageCreate event name:\"sha256:373272045e41e00ebf8da7ce9fc6b26d326fb8b3e665d9f78bb121976f83b1dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:58:54.251928 containerd[1439]: time="2024-10-08T19:58:54.251897743Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:47908d8b3046dadd6fbea273ac5b0b9bb803cc7b58b9114c50bf7591767d2744\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:58:54.252895 containerd[1439]: time="2024-10-08T19:58:54.252402069Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.28.1\" with image id \"sha256:373272045e41e00ebf8da7ce9fc6b26d326fb8b3e665d9f78bb121976f83b1dc\", repo tag \"ghcr.io/flatcar/calico/node:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:47908d8b3046dadd6fbea273ac5b0b9bb803cc7b58b9114c50bf7591767d2744\", size \"113057162\" in 3.09732796s" Oct 8 19:58:54.252895 containerd[1439]: time="2024-10-08T19:58:54.252436194Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.1\" returns image reference \"sha256:373272045e41e00ebf8da7ce9fc6b26d326fb8b3e665d9f78bb121976f83b1dc\"" Oct 8 19:58:54.274435 containerd[1439]: time="2024-10-08T19:58:54.274372343Z" level=info msg="CreateContainer within sandbox \"a8c0d87cd1b962f5adbc66147f8227b8103f0fa1f3ae8c7d4d65de4e9458d3b0\" for 
container &ContainerMetadata{Name:calico-node,Attempt:0,}" Oct 8 19:58:54.287777 containerd[1439]: time="2024-10-08T19:58:54.287722480Z" level=info msg="CreateContainer within sandbox \"a8c0d87cd1b962f5adbc66147f8227b8103f0fa1f3ae8c7d4d65de4e9458d3b0\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"ea37bd1ef3efc35bfddbb0d4f9be14985ac1af70ea1c067a7fa8d914dc9f3643\"" Oct 8 19:58:54.289165 containerd[1439]: time="2024-10-08T19:58:54.289126597Z" level=info msg="StartContainer for \"ea37bd1ef3efc35bfddbb0d4f9be14985ac1af70ea1c067a7fa8d914dc9f3643\"" Oct 8 19:58:54.344024 systemd[1]: Started cri-containerd-ea37bd1ef3efc35bfddbb0d4f9be14985ac1af70ea1c067a7fa8d914dc9f3643.scope - libcontainer container ea37bd1ef3efc35bfddbb0d4f9be14985ac1af70ea1c067a7fa8d914dc9f3643. Oct 8 19:58:54.371894 containerd[1439]: time="2024-10-08T19:58:54.371744044Z" level=info msg="StartContainer for \"ea37bd1ef3efc35bfddbb0d4f9be14985ac1af70ea1c067a7fa8d914dc9f3643\" returns successfully" Oct 8 19:58:54.587707 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Oct 8 19:58:54.587850 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Oct 8 19:58:55.186832 kubelet[2466]: E1008 19:58:55.186781 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:58:55.223875 kubelet[2466]: I1008 19:58:55.223783 2466 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-nvqdx" podStartSLOduration=2.204741444 podStartE2EDuration="12.223764446s" podCreationTimestamp="2024-10-08 19:58:43 +0000 UTC" firstStartedPulling="2024-10-08 19:58:44.234111711 +0000 UTC m=+13.249914661" lastFinishedPulling="2024-10-08 19:58:54.253134713 +0000 UTC m=+23.268937663" observedRunningTime="2024-10-08 19:58:55.222883947 +0000 UTC m=+24.238686897" watchObservedRunningTime="2024-10-08 19:58:55.223764446 +0000 UTC m=+24.239567396" Oct 8 19:58:56.135049 kernel: bpftool[3668]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Oct 8 19:58:56.189015 kubelet[2466]: E1008 19:58:56.188977 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:58:56.207412 systemd[1]: run-containerd-runc-k8s.io-ea37bd1ef3efc35bfddbb0d4f9be14985ac1af70ea1c067a7fa8d914dc9f3643-runc.VnYW14.mount: Deactivated successfully. Oct 8 19:58:56.311285 systemd-networkd[1381]: vxlan.calico: Link UP Oct 8 19:58:56.311486 systemd-networkd[1381]: vxlan.calico: Gained carrier Oct 8 19:58:57.837919 systemd-networkd[1381]: vxlan.calico: Gained IPv6LL Oct 8 19:59:03.082517 containerd[1439]: time="2024-10-08T19:59:03.082391683Z" level=info msg="StopPodSandbox for \"250f2203437d2304a2af85ef820da9a0b000057fac254b692a42d8c0b3dbddbd\"" Oct 8 19:59:03.082517 containerd[1439]: time="2024-10-08T19:59:03.082454409Z" level=info msg="StopPodSandbox for \"40250e5d4746e3055d15aceffd5535ca755781752431f94d61786c890f58f374\"" Oct 8 19:59:03.326391 containerd[1439]: 2024-10-08 19:59:03.204 [INFO][3804] k8s.go 608: Cleaning up netns ContainerID="40250e5d4746e3055d15aceffd5535ca755781752431f94d61786c890f58f374" Oct 8 19:59:03.326391 containerd[1439]: 2024-10-08 19:59:03.206 [INFO][3804] dataplane_linux.go 530: Deleting workload's device in netns. 
ContainerID="40250e5d4746e3055d15aceffd5535ca755781752431f94d61786c890f58f374" iface="eth0" netns="/var/run/netns/cni-869b0c32-c421-dae1-c41a-aba105c7e401" Oct 8 19:59:03.326391 containerd[1439]: 2024-10-08 19:59:03.207 [INFO][3804] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="40250e5d4746e3055d15aceffd5535ca755781752431f94d61786c890f58f374" iface="eth0" netns="/var/run/netns/cni-869b0c32-c421-dae1-c41a-aba105c7e401" Oct 8 19:59:03.326391 containerd[1439]: 2024-10-08 19:59:03.207 [INFO][3804] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="40250e5d4746e3055d15aceffd5535ca755781752431f94d61786c890f58f374" iface="eth0" netns="/var/run/netns/cni-869b0c32-c421-dae1-c41a-aba105c7e401" Oct 8 19:59:03.326391 containerd[1439]: 2024-10-08 19:59:03.207 [INFO][3804] k8s.go 615: Releasing IP address(es) ContainerID="40250e5d4746e3055d15aceffd5535ca755781752431f94d61786c890f58f374" Oct 8 19:59:03.326391 containerd[1439]: 2024-10-08 19:59:03.207 [INFO][3804] utils.go 188: Calico CNI releasing IP address ContainerID="40250e5d4746e3055d15aceffd5535ca755781752431f94d61786c890f58f374" Oct 8 19:59:03.326391 containerd[1439]: 2024-10-08 19:59:03.311 [INFO][3819] ipam_plugin.go 417: Releasing address using handleID ContainerID="40250e5d4746e3055d15aceffd5535ca755781752431f94d61786c890f58f374" HandleID="k8s-pod-network.40250e5d4746e3055d15aceffd5535ca755781752431f94d61786c890f58f374" Workload="localhost-k8s-coredns--6f6b679f8f--s4w46-eth0" Oct 8 19:59:03.326391 containerd[1439]: 2024-10-08 19:59:03.311 [INFO][3819] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 19:59:03.326391 containerd[1439]: 2024-10-08 19:59:03.311 [INFO][3819] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 19:59:03.326391 containerd[1439]: 2024-10-08 19:59:03.321 [WARNING][3819] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="40250e5d4746e3055d15aceffd5535ca755781752431f94d61786c890f58f374" HandleID="k8s-pod-network.40250e5d4746e3055d15aceffd5535ca755781752431f94d61786c890f58f374" Workload="localhost-k8s-coredns--6f6b679f8f--s4w46-eth0" Oct 8 19:59:03.326391 containerd[1439]: 2024-10-08 19:59:03.321 [INFO][3819] ipam_plugin.go 445: Releasing address using workloadID ContainerID="40250e5d4746e3055d15aceffd5535ca755781752431f94d61786c890f58f374" HandleID="k8s-pod-network.40250e5d4746e3055d15aceffd5535ca755781752431f94d61786c890f58f374" Workload="localhost-k8s-coredns--6f6b679f8f--s4w46-eth0" Oct 8 19:59:03.326391 containerd[1439]: 2024-10-08 19:59:03.322 [INFO][3819] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 19:59:03.326391 containerd[1439]: 2024-10-08 19:59:03.325 [INFO][3804] k8s.go 621: Teardown processing complete. 
ContainerID="40250e5d4746e3055d15aceffd5535ca755781752431f94d61786c890f58f374" Oct 8 19:59:03.327151 containerd[1439]: time="2024-10-08T19:59:03.326986777Z" level=info msg="TearDown network for sandbox \"40250e5d4746e3055d15aceffd5535ca755781752431f94d61786c890f58f374\" successfully" Oct 8 19:59:03.327151 containerd[1439]: time="2024-10-08T19:59:03.327015940Z" level=info msg="StopPodSandbox for \"40250e5d4746e3055d15aceffd5535ca755781752431f94d61786c890f58f374\" returns successfully" Oct 8 19:59:03.328600 kubelet[2466]: E1008 19:59:03.328148 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:59:03.328948 containerd[1439]: time="2024-10-08T19:59:03.328735822Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-s4w46,Uid:73b8d60e-382d-4e38-addd-253451caecf4,Namespace:kube-system,Attempt:1,}" Oct 8 19:59:03.332207 systemd[1]: run-netns-cni\x2d869b0c32\x2dc421\x2ddae1\x2dc41a\x2daba105c7e401.mount: Deactivated successfully. Oct 8 19:59:03.338174 containerd[1439]: 2024-10-08 19:59:03.208 [INFO][3805] k8s.go 608: Cleaning up netns ContainerID="250f2203437d2304a2af85ef820da9a0b000057fac254b692a42d8c0b3dbddbd" Oct 8 19:59:03.338174 containerd[1439]: 2024-10-08 19:59:03.208 [INFO][3805] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="250f2203437d2304a2af85ef820da9a0b000057fac254b692a42d8c0b3dbddbd" iface="eth0" netns="/var/run/netns/cni-5eb66def-85ca-14d6-4a01-7b148f6f0c0b" Oct 8 19:59:03.338174 containerd[1439]: 2024-10-08 19:59:03.208 [INFO][3805] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="250f2203437d2304a2af85ef820da9a0b000057fac254b692a42d8c0b3dbddbd" iface="eth0" netns="/var/run/netns/cni-5eb66def-85ca-14d6-4a01-7b148f6f0c0b" Oct 8 19:59:03.338174 containerd[1439]: 2024-10-08 19:59:03.208 [INFO][3805] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="250f2203437d2304a2af85ef820da9a0b000057fac254b692a42d8c0b3dbddbd" iface="eth0" netns="/var/run/netns/cni-5eb66def-85ca-14d6-4a01-7b148f6f0c0b" Oct 8 19:59:03.338174 containerd[1439]: 2024-10-08 19:59:03.208 [INFO][3805] k8s.go 615: Releasing IP address(es) ContainerID="250f2203437d2304a2af85ef820da9a0b000057fac254b692a42d8c0b3dbddbd" Oct 8 19:59:03.338174 containerd[1439]: 2024-10-08 19:59:03.208 [INFO][3805] utils.go 188: Calico CNI releasing IP address ContainerID="250f2203437d2304a2af85ef820da9a0b000057fac254b692a42d8c0b3dbddbd" Oct 8 19:59:03.338174 containerd[1439]: 2024-10-08 19:59:03.311 [INFO][3820] ipam_plugin.go 417: Releasing address using handleID ContainerID="250f2203437d2304a2af85ef820da9a0b000057fac254b692a42d8c0b3dbddbd" HandleID="k8s-pod-network.250f2203437d2304a2af85ef820da9a0b000057fac254b692a42d8c0b3dbddbd" Workload="localhost-k8s-calico--kube--controllers--6d6d468b64--wq8lg-eth0" Oct 8 19:59:03.338174 containerd[1439]: 2024-10-08 19:59:03.311 [INFO][3820] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 19:59:03.338174 containerd[1439]: 2024-10-08 19:59:03.322 [INFO][3820] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 19:59:03.338174 containerd[1439]: 2024-10-08 19:59:03.333 [WARNING][3820] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="250f2203437d2304a2af85ef820da9a0b000057fac254b692a42d8c0b3dbddbd" HandleID="k8s-pod-network.250f2203437d2304a2af85ef820da9a0b000057fac254b692a42d8c0b3dbddbd" Workload="localhost-k8s-calico--kube--controllers--6d6d468b64--wq8lg-eth0" Oct 8 19:59:03.338174 containerd[1439]: 2024-10-08 19:59:03.333 [INFO][3820] ipam_plugin.go 445: Releasing address using workloadID ContainerID="250f2203437d2304a2af85ef820da9a0b000057fac254b692a42d8c0b3dbddbd" HandleID="k8s-pod-network.250f2203437d2304a2af85ef820da9a0b000057fac254b692a42d8c0b3dbddbd" Workload="localhost-k8s-calico--kube--controllers--6d6d468b64--wq8lg-eth0" Oct 8 19:59:03.338174 containerd[1439]: 2024-10-08 19:59:03.334 [INFO][3820] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 19:59:03.338174 containerd[1439]: 2024-10-08 19:59:03.336 [INFO][3805] k8s.go 621: Teardown processing complete. ContainerID="250f2203437d2304a2af85ef820da9a0b000057fac254b692a42d8c0b3dbddbd" Oct 8 19:59:03.339083 containerd[1439]: time="2024-10-08T19:59:03.339041917Z" level=info msg="TearDown network for sandbox \"250f2203437d2304a2af85ef820da9a0b000057fac254b692a42d8c0b3dbddbd\" successfully" Oct 8 19:59:03.339083 containerd[1439]: time="2024-10-08T19:59:03.339070680Z" level=info msg="StopPodSandbox for \"250f2203437d2304a2af85ef820da9a0b000057fac254b692a42d8c0b3dbddbd\" returns successfully" Oct 8 19:59:03.340038 containerd[1439]: time="2024-10-08T19:59:03.339946283Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6d6d468b64-wq8lg,Uid:ccb1d4f9-9ed3-446a-bf6d-201aa89f5d74,Namespace:calico-system,Attempt:1,}" Oct 8 19:59:03.340463 systemd[1]: run-netns-cni\x2d5eb66def\x2d85ca\x2d14d6\x2d4a01\x2d7b148f6f0c0b.mount: Deactivated successfully. Oct 8 19:59:03.475505 systemd-networkd[1381]: cali96181c4f4fb: Link UP Oct 8 19:59:03.476197 systemd-networkd[1381]: cali96181c4f4fb: Gained carrier Oct 8 19:59:03.489594 containerd[1439]: 2024-10-08 19:59:03.393 [INFO][3837] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--6f6b679f8f--s4w46-eth0 coredns-6f6b679f8f- kube-system 73b8d60e-382d-4e38-addd-253451caecf4 689 0 2024-10-08 19:58:37 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-6f6b679f8f-s4w46 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali96181c4f4fb [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="2e2e2e6a094a92afafd1ddca7a0cee4543a91c2903554e19e1a556a0d7cfeac4" Namespace="kube-system" Pod="coredns-6f6b679f8f-s4w46" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--s4w46-" Oct 8 19:59:03.489594 containerd[1439]: 2024-10-08 19:59:03.393 [INFO][3837] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="2e2e2e6a094a92afafd1ddca7a0cee4543a91c2903554e19e1a556a0d7cfeac4" Namespace="kube-system" Pod="coredns-6f6b679f8f-s4w46" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--s4w46-eth0" Oct 8 19:59:03.489594 containerd[1439]: 2024-10-08 19:59:03.421 [INFO][3864] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2e2e2e6a094a92afafd1ddca7a0cee4543a91c2903554e19e1a556a0d7cfeac4" HandleID="k8s-pod-network.2e2e2e6a094a92afafd1ddca7a0cee4543a91c2903554e19e1a556a0d7cfeac4" Workload="localhost-k8s-coredns--6f6b679f8f--s4w46-eth0" Oct 8 19:59:03.489594 containerd[1439]: 2024-10-08 19:59:03.434 
[INFO][3864] ipam_plugin.go 270: Auto assigning IP ContainerID="2e2e2e6a094a92afafd1ddca7a0cee4543a91c2903554e19e1a556a0d7cfeac4" HandleID="k8s-pod-network.2e2e2e6a094a92afafd1ddca7a0cee4543a91c2903554e19e1a556a0d7cfeac4" Workload="localhost-k8s-coredns--6f6b679f8f--s4w46-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000278300), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-6f6b679f8f-s4w46", "timestamp":"2024-10-08 19:59:03.421776342 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 8 19:59:03.489594 containerd[1439]: 2024-10-08 19:59:03.434 [INFO][3864] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 19:59:03.489594 containerd[1439]: 2024-10-08 19:59:03.434 [INFO][3864] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 19:59:03.489594 containerd[1439]: 2024-10-08 19:59:03.434 [INFO][3864] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 8 19:59:03.489594 containerd[1439]: 2024-10-08 19:59:03.438 [INFO][3864] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.2e2e2e6a094a92afafd1ddca7a0cee4543a91c2903554e19e1a556a0d7cfeac4" host="localhost" Oct 8 19:59:03.489594 containerd[1439]: 2024-10-08 19:59:03.452 [INFO][3864] ipam.go 372: Looking up existing affinities for host host="localhost" Oct 8 19:59:03.489594 containerd[1439]: 2024-10-08 19:59:03.456 [INFO][3864] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Oct 8 19:59:03.489594 containerd[1439]: 2024-10-08 19:59:03.457 [INFO][3864] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 8 19:59:03.489594 containerd[1439]: 2024-10-08 19:59:03.459 [INFO][3864] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 8 19:59:03.489594 containerd[1439]: 2024-10-08 19:59:03.459 [INFO][3864] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.2e2e2e6a094a92afafd1ddca7a0cee4543a91c2903554e19e1a556a0d7cfeac4" host="localhost" Oct 8 19:59:03.489594 containerd[1439]: 2024-10-08 19:59:03.461 [INFO][3864] ipam.go 1685: Creating new handle: k8s-pod-network.2e2e2e6a094a92afafd1ddca7a0cee4543a91c2903554e19e1a556a0d7cfeac4 Oct 8 19:59:03.489594 containerd[1439]: 2024-10-08 19:59:03.464 [INFO][3864] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.2e2e2e6a094a92afafd1ddca7a0cee4543a91c2903554e19e1a556a0d7cfeac4" host="localhost" Oct 8 19:59:03.489594 containerd[1439]: 2024-10-08 19:59:03.469 [INFO][3864] ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.2e2e2e6a094a92afafd1ddca7a0cee4543a91c2903554e19e1a556a0d7cfeac4" host="localhost" Oct 8 19:59:03.489594 containerd[1439]: 2024-10-08 19:59:03.469 [INFO][3864] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.2e2e2e6a094a92afafd1ddca7a0cee4543a91c2903554e19e1a556a0d7cfeac4" host="localhost" Oct 8 19:59:03.489594 containerd[1439]: 2024-10-08 19:59:03.469 [INFO][3864] ipam_plugin.go 379: Released host-wide IPAM lock. 
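[The ipam.go lines above show Calico's block-affinity model at work under the host-wide IPAM lock: this node holds an affinity for the /26 block 192.168.88.128/26 and claims the first free address in it, 192.168.88.129, for the new endpoint. The block arithmetic, sketched with net/netip (the real allocator also tracks reservations and handles in the datastore):]

```go
package main

import (
	"fmt"
	"net/netip"
)

func main() {
	block := netip.MustParsePrefix("192.168.88.128/26") // host-affine block from the log
	used := map[netip.Addr]bool{
		block.Addr(): true, // network address itself is not handed out
	}

	// Walk the block and hand out the first unused address.
	for a := block.Addr(); block.Contains(a); a = a.Next() {
		if !used[a] {
			fmt.Println("assigning", a) // 192.168.88.129, as claimed above
			break
		}
	}
}
```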
Oct 8 19:59:03.489594 containerd[1439]: 2024-10-08 19:59:03.469 [INFO][3864] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="2e2e2e6a094a92afafd1ddca7a0cee4543a91c2903554e19e1a556a0d7cfeac4" HandleID="k8s-pod-network.2e2e2e6a094a92afafd1ddca7a0cee4543a91c2903554e19e1a556a0d7cfeac4" Workload="localhost-k8s-coredns--6f6b679f8f--s4w46-eth0" Oct 8 19:59:03.490115 containerd[1439]: 2024-10-08 19:59:03.471 [INFO][3837] k8s.go 386: Populated endpoint ContainerID="2e2e2e6a094a92afafd1ddca7a0cee4543a91c2903554e19e1a556a0d7cfeac4" Namespace="kube-system" Pod="coredns-6f6b679f8f-s4w46" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--s4w46-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--s4w46-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"73b8d60e-382d-4e38-addd-253451caecf4", ResourceVersion:"689", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 58, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-6f6b679f8f-s4w46", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali96181c4f4fb", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:59:03.490115 containerd[1439]: 2024-10-08 19:59:03.472 [INFO][3837] k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="2e2e2e6a094a92afafd1ddca7a0cee4543a91c2903554e19e1a556a0d7cfeac4" Namespace="kube-system" Pod="coredns-6f6b679f8f-s4w46" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--s4w46-eth0" Oct 8 19:59:03.490115 containerd[1439]: 2024-10-08 19:59:03.472 [INFO][3837] dataplane_linux.go 68: Setting the host side veth name to cali96181c4f4fb ContainerID="2e2e2e6a094a92afafd1ddca7a0cee4543a91c2903554e19e1a556a0d7cfeac4" Namespace="kube-system" Pod="coredns-6f6b679f8f-s4w46" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--s4w46-eth0" Oct 8 19:59:03.490115 containerd[1439]: 2024-10-08 19:59:03.476 [INFO][3837] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="2e2e2e6a094a92afafd1ddca7a0cee4543a91c2903554e19e1a556a0d7cfeac4" Namespace="kube-system" Pod="coredns-6f6b679f8f-s4w46" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--s4w46-eth0" Oct 8 19:59:03.490115 containerd[1439]: 2024-10-08 19:59:03.477 [INFO][3837] k8s.go 414: Added Mac, interface name, and 
active container ID to endpoint ContainerID="2e2e2e6a094a92afafd1ddca7a0cee4543a91c2903554e19e1a556a0d7cfeac4" Namespace="kube-system" Pod="coredns-6f6b679f8f-s4w46" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--s4w46-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--s4w46-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"73b8d60e-382d-4e38-addd-253451caecf4", ResourceVersion:"689", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 58, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2e2e2e6a094a92afafd1ddca7a0cee4543a91c2903554e19e1a556a0d7cfeac4", Pod:"coredns-6f6b679f8f-s4w46", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali96181c4f4fb", MAC:"32:dc:3b:64:65:be", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:59:03.490115 containerd[1439]: 2024-10-08 19:59:03.485 [INFO][3837] k8s.go 500: Wrote updated endpoint to datastore ContainerID="2e2e2e6a094a92afafd1ddca7a0cee4543a91c2903554e19e1a556a0d7cfeac4" Namespace="kube-system" Pod="coredns-6f6b679f8f-s4w46" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--s4w46-eth0" Oct 8 19:59:03.514664 containerd[1439]: time="2024-10-08T19:59:03.514559917Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 19:59:03.514664 containerd[1439]: time="2024-10-08T19:59:03.514616683Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 19:59:03.514664 containerd[1439]: time="2024-10-08T19:59:03.514627924Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:59:03.514977 containerd[1439]: time="2024-10-08T19:59:03.514724493Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:59:03.530057 systemd[1]: Started cri-containerd-2e2e2e6a094a92afafd1ddca7a0cee4543a91c2903554e19e1a556a0d7cfeac4.scope - libcontainer container 2e2e2e6a094a92afafd1ddca7a0cee4543a91c2903554e19e1a556a0d7cfeac4. 
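The v3.WorkloadEndpoint dump above is Calico's datastore record of the pod's network attachment: coredns-6f6b679f8f-s4w46 gets 192.168.88.129/32 behind the host-side veth cali96181c4f4fb with MAC 32:dc:3b:64:65:be. One readability trap: the Go struct printing renders the endpoint port numbers in hex. The tiny Go sketch below only decodes the values copied from the log; it is an editorial aid, not additional log output.

    package main

    // Decode the hex port numbers from the WorkloadEndpoint dumps above.
    // Values are copied verbatim from the log.
    import "fmt"

    func main() {
        ports := map[string]uint16{
            "dns (UDP)":     0x35,   // 53
            "dns-tcp (TCP)": 0x35,   // 53
            "metrics (TCP)": 0x23c1, // 9153, CoreDNS's Prometheus metrics port
        }
        for name, p := range ports {
            fmt.Printf("%-14s -> %d\n", name, p)
        }
    }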
Oct 8 19:59:03.540943 systemd-resolved[1316]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 8 19:59:03.577993 systemd-networkd[1381]: calicb4ee56a745: Link UP Oct 8 19:59:03.578944 systemd-networkd[1381]: calicb4ee56a745: Gained carrier Oct 8 19:59:03.593579 containerd[1439]: time="2024-10-08T19:59:03.593366811Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-s4w46,Uid:73b8d60e-382d-4e38-addd-253451caecf4,Namespace:kube-system,Attempt:1,} returns sandbox id \"2e2e2e6a094a92afafd1ddca7a0cee4543a91c2903554e19e1a556a0d7cfeac4\"" Oct 8 19:59:03.595371 kubelet[2466]: E1008 19:59:03.595145 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:59:03.597323 containerd[1439]: time="2024-10-08T19:59:03.597176131Z" level=info msg="CreateContainer within sandbox \"2e2e2e6a094a92afafd1ddca7a0cee4543a91c2903554e19e1a556a0d7cfeac4\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 8 19:59:03.609376 containerd[1439]: 2024-10-08 19:59:03.393 [INFO][3848] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--6d6d468b64--wq8lg-eth0 calico-kube-controllers-6d6d468b64- calico-system ccb1d4f9-9ed3-446a-bf6d-201aa89f5d74 690 0 2024-10-08 19:58:43 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6d6d468b64 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-6d6d468b64-wq8lg eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calicb4ee56a745 [] []}} ContainerID="6382f06f94fc70f7e82ca8f32d60107d6b76e963ce6cf852b70d732fffe62120" Namespace="calico-system" Pod="calico-kube-controllers-6d6d468b64-wq8lg" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6d6d468b64--wq8lg-" Oct 8 19:59:03.609376 containerd[1439]: 2024-10-08 19:59:03.393 [INFO][3848] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="6382f06f94fc70f7e82ca8f32d60107d6b76e963ce6cf852b70d732fffe62120" Namespace="calico-system" Pod="calico-kube-controllers-6d6d468b64-wq8lg" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6d6d468b64--wq8lg-eth0" Oct 8 19:59:03.609376 containerd[1439]: 2024-10-08 19:59:03.420 [INFO][3865] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6382f06f94fc70f7e82ca8f32d60107d6b76e963ce6cf852b70d732fffe62120" HandleID="k8s-pod-network.6382f06f94fc70f7e82ca8f32d60107d6b76e963ce6cf852b70d732fffe62120" Workload="localhost-k8s-calico--kube--controllers--6d6d468b64--wq8lg-eth0" Oct 8 19:59:03.609376 containerd[1439]: 2024-10-08 19:59:03.434 [INFO][3865] ipam_plugin.go 270: Auto assigning IP ContainerID="6382f06f94fc70f7e82ca8f32d60107d6b76e963ce6cf852b70d732fffe62120" HandleID="k8s-pod-network.6382f06f94fc70f7e82ca8f32d60107d6b76e963ce6cf852b70d732fffe62120" Workload="localhost-k8s-calico--kube--controllers--6d6d468b64--wq8lg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40001fbe20), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-6d6d468b64-wq8lg", "timestamp":"2024-10-08 19:59:03.420082502 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, 
IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 8 19:59:03.609376 containerd[1439]: 2024-10-08 19:59:03.434 [INFO][3865] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 19:59:03.609376 containerd[1439]: 2024-10-08 19:59:03.469 [INFO][3865] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 19:59:03.609376 containerd[1439]: 2024-10-08 19:59:03.469 [INFO][3865] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 8 19:59:03.609376 containerd[1439]: 2024-10-08 19:59:03.539 [INFO][3865] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.6382f06f94fc70f7e82ca8f32d60107d6b76e963ce6cf852b70d732fffe62120" host="localhost" Oct 8 19:59:03.609376 containerd[1439]: 2024-10-08 19:59:03.545 [INFO][3865] ipam.go 372: Looking up existing affinities for host host="localhost" Oct 8 19:59:03.609376 containerd[1439]: 2024-10-08 19:59:03.558 [INFO][3865] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Oct 8 19:59:03.609376 containerd[1439]: 2024-10-08 19:59:03.560 [INFO][3865] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 8 19:59:03.609376 containerd[1439]: 2024-10-08 19:59:03.562 [INFO][3865] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 8 19:59:03.609376 containerd[1439]: 2024-10-08 19:59:03.562 [INFO][3865] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.6382f06f94fc70f7e82ca8f32d60107d6b76e963ce6cf852b70d732fffe62120" host="localhost" Oct 8 19:59:03.609376 containerd[1439]: 2024-10-08 19:59:03.564 [INFO][3865] ipam.go 1685: Creating new handle: k8s-pod-network.6382f06f94fc70f7e82ca8f32d60107d6b76e963ce6cf852b70d732fffe62120 Oct 8 19:59:03.609376 containerd[1439]: 2024-10-08 19:59:03.567 [INFO][3865] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.6382f06f94fc70f7e82ca8f32d60107d6b76e963ce6cf852b70d732fffe62120" host="localhost" Oct 8 19:59:03.609376 containerd[1439]: 2024-10-08 19:59:03.573 [INFO][3865] ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.6382f06f94fc70f7e82ca8f32d60107d6b76e963ce6cf852b70d732fffe62120" host="localhost" Oct 8 19:59:03.609376 containerd[1439]: 2024-10-08 19:59:03.573 [INFO][3865] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.6382f06f94fc70f7e82ca8f32d60107d6b76e963ce6cf852b70d732fffe62120" host="localhost" Oct 8 19:59:03.609376 containerd[1439]: 2024-10-08 19:59:03.573 [INFO][3865] ipam_plugin.go 379: Released host-wide IPAM lock. 
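The ipam.go / ipam_plugin.go sequence above is Calico's block-affinity allocation: under a host-wide lock, the plugin looks up the host's affine block (192.168.88.128/26 here), loads it, claims the next free address, and writes the block back before releasing the lock. A minimal Go sketch of that control flow follows; the types and the in-memory block are hypothetical stand-ins for libcalico-go's IPAM internals, not the real implementation.

    package main

    // Editorial paraphrase of the allocation steps traced by ipam.go above.
    import (
        "errors"
        "fmt"
        "net"
    )

    type block struct {
        cidr net.IPNet
        used map[string]bool // addresses already handed out
    }

    // autoAssign mirrors the logged steps: acquire the host-wide lock,
    // try the host's affine block, claim the first free address.
    func autoAssign(host string, b *block, lock chan struct{}) (net.IP, error) {
        lock <- struct{}{}        // "About to acquire host-wide IPAM lock."
        defer func() { <-lock }() // "Released host-wide IPAM lock."

        // "Trying affinity for 192.168.88.128/26" / "Attempting to load block"
        for ip := b.cidr.IP.Mask(b.cidr.Mask); b.cidr.Contains(ip); ip = next(ip) {
            if !b.used[ip.String()] {
                b.used[ip.String()] = true // "Writing block in order to claim IPs"
                return ip, nil             // "Successfully claimed IPs"
            }
        }
        return nil, errors.New("block exhausted for host " + host)
    }

    func next(ip net.IP) net.IP {
        out := make(net.IP, len(ip))
        copy(out, ip)
        for i := len(out) - 1; i >= 0; i-- {
            out[i]++
            if out[i] != 0 {
                break
            }
        }
        return out
    }

    func main() {
        _, cidr, _ := net.ParseCIDR("192.168.88.128/26")
        b := &block{cidr: *cidr, used: map[string]bool{
            "192.168.88.128": true, // block base, reserved
            "192.168.88.129": true, // already assigned to coredns-...-s4w46
        }}
        lock := make(chan struct{}, 1)
        ip, err := autoAssign("localhost", b, lock)
        if err != nil {
            panic(err)
        }
        fmt.Println(ip) // 192.168.88.130, matching the log
    }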
Oct 8 19:59:03.609376 containerd[1439]: 2024-10-08 19:59:03.573 [INFO][3865] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="6382f06f94fc70f7e82ca8f32d60107d6b76e963ce6cf852b70d732fffe62120" HandleID="k8s-pod-network.6382f06f94fc70f7e82ca8f32d60107d6b76e963ce6cf852b70d732fffe62120" Workload="localhost-k8s-calico--kube--controllers--6d6d468b64--wq8lg-eth0" Oct 8 19:59:03.610504 containerd[1439]: 2024-10-08 19:59:03.575 [INFO][3848] k8s.go 386: Populated endpoint ContainerID="6382f06f94fc70f7e82ca8f32d60107d6b76e963ce6cf852b70d732fffe62120" Namespace="calico-system" Pod="calico-kube-controllers-6d6d468b64-wq8lg" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6d6d468b64--wq8lg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6d6d468b64--wq8lg-eth0", GenerateName:"calico-kube-controllers-6d6d468b64-", Namespace:"calico-system", SelfLink:"", UID:"ccb1d4f9-9ed3-446a-bf6d-201aa89f5d74", ResourceVersion:"690", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 58, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6d6d468b64", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-6d6d468b64-wq8lg", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calicb4ee56a745", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:59:03.610504 containerd[1439]: 2024-10-08 19:59:03.575 [INFO][3848] k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="6382f06f94fc70f7e82ca8f32d60107d6b76e963ce6cf852b70d732fffe62120" Namespace="calico-system" Pod="calico-kube-controllers-6d6d468b64-wq8lg" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6d6d468b64--wq8lg-eth0" Oct 8 19:59:03.610504 containerd[1439]: 2024-10-08 19:59:03.575 [INFO][3848] dataplane_linux.go 68: Setting the host side veth name to calicb4ee56a745 ContainerID="6382f06f94fc70f7e82ca8f32d60107d6b76e963ce6cf852b70d732fffe62120" Namespace="calico-system" Pod="calico-kube-controllers-6d6d468b64-wq8lg" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6d6d468b64--wq8lg-eth0" Oct 8 19:59:03.610504 containerd[1439]: 2024-10-08 19:59:03.579 [INFO][3848] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="6382f06f94fc70f7e82ca8f32d60107d6b76e963ce6cf852b70d732fffe62120" Namespace="calico-system" Pod="calico-kube-controllers-6d6d468b64-wq8lg" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6d6d468b64--wq8lg-eth0" Oct 8 19:59:03.610504 containerd[1439]: 2024-10-08 19:59:03.579 [INFO][3848] k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="6382f06f94fc70f7e82ca8f32d60107d6b76e963ce6cf852b70d732fffe62120" Namespace="calico-system" Pod="calico-kube-controllers-6d6d468b64-wq8lg" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6d6d468b64--wq8lg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6d6d468b64--wq8lg-eth0", GenerateName:"calico-kube-controllers-6d6d468b64-", Namespace:"calico-system", SelfLink:"", UID:"ccb1d4f9-9ed3-446a-bf6d-201aa89f5d74", ResourceVersion:"690", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 58, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6d6d468b64", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6382f06f94fc70f7e82ca8f32d60107d6b76e963ce6cf852b70d732fffe62120", Pod:"calico-kube-controllers-6d6d468b64-wq8lg", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calicb4ee56a745", MAC:"fe:b6:08:4e:65:7d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:59:03.610504 containerd[1439]: 2024-10-08 19:59:03.605 [INFO][3848] k8s.go 500: Wrote updated endpoint to datastore ContainerID="6382f06f94fc70f7e82ca8f32d60107d6b76e963ce6cf852b70d732fffe62120" Namespace="calico-system" Pod="calico-kube-controllers-6d6d468b64-wq8lg" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6d6d468b64--wq8lg-eth0" Oct 8 19:59:03.668761 containerd[1439]: time="2024-10-08T19:59:03.668528680Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 19:59:03.668761 containerd[1439]: time="2024-10-08T19:59:03.668603687Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 19:59:03.668761 containerd[1439]: time="2024-10-08T19:59:03.668627729Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:59:03.668761 containerd[1439]: time="2024-10-08T19:59:03.668714377Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:59:03.685034 systemd[1]: Started cri-containerd-6382f06f94fc70f7e82ca8f32d60107d6b76e963ce6cf852b70d732fffe62120.scope - libcontainer container 6382f06f94fc70f7e82ca8f32d60107d6b76e963ce6cf852b70d732fffe62120. 
Oct 8 19:59:03.693587 containerd[1439]: time="2024-10-08T19:59:03.693546766Z" level=info msg="CreateContainer within sandbox \"2e2e2e6a094a92afafd1ddca7a0cee4543a91c2903554e19e1a556a0d7cfeac4\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c5dc25c6b14f7c17299461b6bc34f0e36841e10dff998c18c71bfcd8ca1ab026\"" Oct 8 19:59:03.694501 containerd[1439]: time="2024-10-08T19:59:03.694473054Z" level=info msg="StartContainer for \"c5dc25c6b14f7c17299461b6bc34f0e36841e10dff998c18c71bfcd8ca1ab026\"" Oct 8 19:59:03.697007 systemd-resolved[1316]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 8 19:59:03.715603 containerd[1439]: time="2024-10-08T19:59:03.715560328Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6d6d468b64-wq8lg,Uid:ccb1d4f9-9ed3-446a-bf6d-201aa89f5d74,Namespace:calico-system,Attempt:1,} returns sandbox id \"6382f06f94fc70f7e82ca8f32d60107d6b76e963ce6cf852b70d732fffe62120\"" Oct 8 19:59:03.718005 containerd[1439]: time="2024-10-08T19:59:03.717580719Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\"" Oct 8 19:59:03.720019 systemd[1]: Started cri-containerd-c5dc25c6b14f7c17299461b6bc34f0e36841e10dff998c18c71bfcd8ca1ab026.scope - libcontainer container c5dc25c6b14f7c17299461b6bc34f0e36841e10dff998c18c71bfcd8ca1ab026. Oct 8 19:59:03.743563 containerd[1439]: time="2024-10-08T19:59:03.741897579Z" level=info msg="StartContainer for \"c5dc25c6b14f7c17299461b6bc34f0e36841e10dff998c18c71bfcd8ca1ab026\" returns successfully" Oct 8 19:59:04.209285 kubelet[2466]: E1008 19:59:04.208820 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:59:04.220280 kubelet[2466]: I1008 19:59:04.220178 2466 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-s4w46" podStartSLOduration=27.220160978 podStartE2EDuration="27.220160978s" podCreationTimestamp="2024-10-08 19:58:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-08 19:59:04.220097972 +0000 UTC m=+33.235900922" watchObservedRunningTime="2024-10-08 19:59:04.220160978 +0000 UTC m=+33.235963928" Oct 8 19:59:04.678990 systemd-networkd[1381]: calicb4ee56a745: Gained IPv6LL Oct 8 19:59:04.953998 containerd[1439]: time="2024-10-08T19:59:04.953846303Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:59:04.955908 containerd[1439]: time="2024-10-08T19:59:04.955035297Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.28.1: active requests=0, bytes read=31361753" Oct 8 19:59:04.956284 containerd[1439]: time="2024-10-08T19:59:04.956238092Z" level=info msg="ImageCreate event name:\"sha256:dde0e0aa888dfe01de8f2b6b4879c4391e01cc95a7a8a608194d8ed663fe6a39\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:59:04.959896 containerd[1439]: time="2024-10-08T19:59:04.959818154Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:9a7338f7187d4d2352fe49eedee44b191ac92557a2e71aa3de3527ed85c1641b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:59:04.960580 containerd[1439]: time="2024-10-08T19:59:04.960543463Z" level=info msg="Pulled image 
\"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" with image id \"sha256:dde0e0aa888dfe01de8f2b6b4879c4391e01cc95a7a8a608194d8ed663fe6a39\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:9a7338f7187d4d2352fe49eedee44b191ac92557a2e71aa3de3527ed85c1641b\", size \"32729240\" in 1.24292714s" Oct 8 19:59:04.960614 containerd[1439]: time="2024-10-08T19:59:04.960577266Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" returns image reference \"sha256:dde0e0aa888dfe01de8f2b6b4879c4391e01cc95a7a8a608194d8ed663fe6a39\"" Oct 8 19:59:04.970609 containerd[1439]: time="2024-10-08T19:59:04.970563780Z" level=info msg="CreateContainer within sandbox \"6382f06f94fc70f7e82ca8f32d60107d6b76e963ce6cf852b70d732fffe62120\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Oct 8 19:59:05.014890 containerd[1439]: time="2024-10-08T19:59:05.014760052Z" level=info msg="CreateContainer within sandbox \"6382f06f94fc70f7e82ca8f32d60107d6b76e963ce6cf852b70d732fffe62120\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"1c33d73ed16c16f492a199100b0d17735f6a1bf47ef5406d28ff9a6a2346d935\"" Oct 8 19:59:05.015574 containerd[1439]: time="2024-10-08T19:59:05.015533420Z" level=info msg="StartContainer for \"1c33d73ed16c16f492a199100b0d17735f6a1bf47ef5406d28ff9a6a2346d935\"" Oct 8 19:59:05.060102 systemd[1]: Started cri-containerd-1c33d73ed16c16f492a199100b0d17735f6a1bf47ef5406d28ff9a6a2346d935.scope - libcontainer container 1c33d73ed16c16f492a199100b0d17735f6a1bf47ef5406d28ff9a6a2346d935. Oct 8 19:59:05.082030 containerd[1439]: time="2024-10-08T19:59:05.081954599Z" level=info msg="StopPodSandbox for \"a9b1e279edefff23149aad4d2ba0edd2dea6c38a621a5cf2b33459b96f9721fe\"" Oct 8 19:59:05.084258 containerd[1439]: time="2024-10-08T19:59:05.084018166Z" level=info msg="StopPodSandbox for \"13b6ddaadef47d29b0c2e8aee812e0d282261de0e439780c6a582e7bdceb27d2\"" Oct 8 19:59:05.127109 systemd-networkd[1381]: cali96181c4f4fb: Gained IPv6LL Oct 8 19:59:05.183459 containerd[1439]: time="2024-10-08T19:59:05.183417381Z" level=info msg="StartContainer for \"1c33d73ed16c16f492a199100b0d17735f6a1bf47ef5406d28ff9a6a2346d935\" returns successfully" Oct 8 19:59:05.215900 kubelet[2466]: E1008 19:59:05.215736 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:59:05.236901 containerd[1439]: 2024-10-08 19:59:05.182 [INFO][4099] k8s.go 608: Cleaning up netns ContainerID="a9b1e279edefff23149aad4d2ba0edd2dea6c38a621a5cf2b33459b96f9721fe" Oct 8 19:59:05.236901 containerd[1439]: 2024-10-08 19:59:05.182 [INFO][4099] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="a9b1e279edefff23149aad4d2ba0edd2dea6c38a621a5cf2b33459b96f9721fe" iface="eth0" netns="/var/run/netns/cni-6f6f6359-8030-1c13-ab73-487a7bd36c34" Oct 8 19:59:05.236901 containerd[1439]: 2024-10-08 19:59:05.183 [INFO][4099] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="a9b1e279edefff23149aad4d2ba0edd2dea6c38a621a5cf2b33459b96f9721fe" iface="eth0" netns="/var/run/netns/cni-6f6f6359-8030-1c13-ab73-487a7bd36c34" Oct 8 19:59:05.236901 containerd[1439]: 2024-10-08 19:59:05.183 [INFO][4099] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="a9b1e279edefff23149aad4d2ba0edd2dea6c38a621a5cf2b33459b96f9721fe" iface="eth0" netns="/var/run/netns/cni-6f6f6359-8030-1c13-ab73-487a7bd36c34" Oct 8 19:59:05.236901 containerd[1439]: 2024-10-08 19:59:05.183 [INFO][4099] k8s.go 615: Releasing IP address(es) ContainerID="a9b1e279edefff23149aad4d2ba0edd2dea6c38a621a5cf2b33459b96f9721fe" Oct 8 19:59:05.236901 containerd[1439]: 2024-10-08 19:59:05.183 [INFO][4099] utils.go 188: Calico CNI releasing IP address ContainerID="a9b1e279edefff23149aad4d2ba0edd2dea6c38a621a5cf2b33459b96f9721fe" Oct 8 19:59:05.236901 containerd[1439]: 2024-10-08 19:59:05.212 [INFO][4127] ipam_plugin.go 417: Releasing address using handleID ContainerID="a9b1e279edefff23149aad4d2ba0edd2dea6c38a621a5cf2b33459b96f9721fe" HandleID="k8s-pod-network.a9b1e279edefff23149aad4d2ba0edd2dea6c38a621a5cf2b33459b96f9721fe" Workload="localhost-k8s-coredns--6f6b679f8f--lccm5-eth0" Oct 8 19:59:05.236901 containerd[1439]: 2024-10-08 19:59:05.212 [INFO][4127] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 19:59:05.236901 containerd[1439]: 2024-10-08 19:59:05.212 [INFO][4127] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 19:59:05.236901 containerd[1439]: 2024-10-08 19:59:05.223 [WARNING][4127] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="a9b1e279edefff23149aad4d2ba0edd2dea6c38a621a5cf2b33459b96f9721fe" HandleID="k8s-pod-network.a9b1e279edefff23149aad4d2ba0edd2dea6c38a621a5cf2b33459b96f9721fe" Workload="localhost-k8s-coredns--6f6b679f8f--lccm5-eth0" Oct 8 19:59:05.236901 containerd[1439]: 2024-10-08 19:59:05.223 [INFO][4127] ipam_plugin.go 445: Releasing address using workloadID ContainerID="a9b1e279edefff23149aad4d2ba0edd2dea6c38a621a5cf2b33459b96f9721fe" HandleID="k8s-pod-network.a9b1e279edefff23149aad4d2ba0edd2dea6c38a621a5cf2b33459b96f9721fe" Workload="localhost-k8s-coredns--6f6b679f8f--lccm5-eth0" Oct 8 19:59:05.236901 containerd[1439]: 2024-10-08 19:59:05.228 [INFO][4127] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 19:59:05.236901 containerd[1439]: 2024-10-08 19:59:05.230 [INFO][4099] k8s.go 621: Teardown processing complete. ContainerID="a9b1e279edefff23149aad4d2ba0edd2dea6c38a621a5cf2b33459b96f9721fe" Oct 8 19:59:05.238234 containerd[1439]: time="2024-10-08T19:59:05.237978988Z" level=info msg="TearDown network for sandbox \"a9b1e279edefff23149aad4d2ba0edd2dea6c38a621a5cf2b33459b96f9721fe\" successfully" Oct 8 19:59:05.238234 containerd[1439]: time="2024-10-08T19:59:05.238012990Z" level=info msg="StopPodSandbox for \"a9b1e279edefff23149aad4d2ba0edd2dea6c38a621a5cf2b33459b96f9721fe\" returns successfully" Oct 8 19:59:05.239240 kubelet[2466]: E1008 19:59:05.239215 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:59:05.240913 containerd[1439]: time="2024-10-08T19:59:05.239901506Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-lccm5,Uid:4a8304d9-5b66-4740-817a-39422665117d,Namespace:kube-system,Attempt:1,}" Oct 8 19:59:05.266824 containerd[1439]: 2024-10-08 19:59:05.209 [INFO][4098] k8s.go 608: Cleaning up netns ContainerID="13b6ddaadef47d29b0c2e8aee812e0d282261de0e439780c6a582e7bdceb27d2" Oct 8 19:59:05.266824 containerd[1439]: 2024-10-08 19:59:05.209 [INFO][4098] dataplane_linux.go 530: Deleting workload's device in netns. 
ContainerID="13b6ddaadef47d29b0c2e8aee812e0d282261de0e439780c6a582e7bdceb27d2" iface="eth0" netns="/var/run/netns/cni-52664b7b-205d-c64b-d98c-f57ff20293e0" Oct 8 19:59:05.266824 containerd[1439]: 2024-10-08 19:59:05.209 [INFO][4098] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="13b6ddaadef47d29b0c2e8aee812e0d282261de0e439780c6a582e7bdceb27d2" iface="eth0" netns="/var/run/netns/cni-52664b7b-205d-c64b-d98c-f57ff20293e0" Oct 8 19:59:05.266824 containerd[1439]: 2024-10-08 19:59:05.209 [INFO][4098] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="13b6ddaadef47d29b0c2e8aee812e0d282261de0e439780c6a582e7bdceb27d2" iface="eth0" netns="/var/run/netns/cni-52664b7b-205d-c64b-d98c-f57ff20293e0" Oct 8 19:59:05.266824 containerd[1439]: 2024-10-08 19:59:05.210 [INFO][4098] k8s.go 615: Releasing IP address(es) ContainerID="13b6ddaadef47d29b0c2e8aee812e0d282261de0e439780c6a582e7bdceb27d2" Oct 8 19:59:05.266824 containerd[1439]: 2024-10-08 19:59:05.210 [INFO][4098] utils.go 188: Calico CNI releasing IP address ContainerID="13b6ddaadef47d29b0c2e8aee812e0d282261de0e439780c6a582e7bdceb27d2" Oct 8 19:59:05.266824 containerd[1439]: 2024-10-08 19:59:05.248 [INFO][4135] ipam_plugin.go 417: Releasing address using handleID ContainerID="13b6ddaadef47d29b0c2e8aee812e0d282261de0e439780c6a582e7bdceb27d2" HandleID="k8s-pod-network.13b6ddaadef47d29b0c2e8aee812e0d282261de0e439780c6a582e7bdceb27d2" Workload="localhost-k8s-csi--node--driver--ggmcg-eth0" Oct 8 19:59:05.266824 containerd[1439]: 2024-10-08 19:59:05.248 [INFO][4135] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 19:59:05.266824 containerd[1439]: 2024-10-08 19:59:05.248 [INFO][4135] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 19:59:05.266824 containerd[1439]: 2024-10-08 19:59:05.257 [WARNING][4135] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="13b6ddaadef47d29b0c2e8aee812e0d282261de0e439780c6a582e7bdceb27d2" HandleID="k8s-pod-network.13b6ddaadef47d29b0c2e8aee812e0d282261de0e439780c6a582e7bdceb27d2" Workload="localhost-k8s-csi--node--driver--ggmcg-eth0" Oct 8 19:59:05.266824 containerd[1439]: 2024-10-08 19:59:05.258 [INFO][4135] ipam_plugin.go 445: Releasing address using workloadID ContainerID="13b6ddaadef47d29b0c2e8aee812e0d282261de0e439780c6a582e7bdceb27d2" HandleID="k8s-pod-network.13b6ddaadef47d29b0c2e8aee812e0d282261de0e439780c6a582e7bdceb27d2" Workload="localhost-k8s-csi--node--driver--ggmcg-eth0" Oct 8 19:59:05.266824 containerd[1439]: 2024-10-08 19:59:05.259 [INFO][4135] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 19:59:05.266824 containerd[1439]: 2024-10-08 19:59:05.262 [INFO][4098] k8s.go 621: Teardown processing complete. 
ContainerID="13b6ddaadef47d29b0c2e8aee812e0d282261de0e439780c6a582e7bdceb27d2" Oct 8 19:59:05.270579 containerd[1439]: time="2024-10-08T19:59:05.269207515Z" level=info msg="TearDown network for sandbox \"13b6ddaadef47d29b0c2e8aee812e0d282261de0e439780c6a582e7bdceb27d2\" successfully" Oct 8 19:59:05.270579 containerd[1439]: time="2024-10-08T19:59:05.269244797Z" level=info msg="StopPodSandbox for \"13b6ddaadef47d29b0c2e8aee812e0d282261de0e439780c6a582e7bdceb27d2\" returns successfully" Oct 8 19:59:05.271641 containerd[1439]: time="2024-10-08T19:59:05.270995305Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ggmcg,Uid:199cac1a-3d1f-4713-aec1-c124cb5e48d4,Namespace:calico-system,Attempt:1,}" Oct 8 19:59:05.334788 systemd[1]: run-netns-cni\x2d52664b7b\x2d205d\x2dc64b\x2dd98c\x2df57ff20293e0.mount: Deactivated successfully. Oct 8 19:59:05.334904 systemd[1]: run-netns-cni\x2d6f6f6359\x2d8030\x2d1c13\x2dab73\x2d487a7bd36c34.mount: Deactivated successfully. Oct 8 19:59:05.404152 systemd-networkd[1381]: cali6af82b3a807: Link UP Oct 8 19:59:05.404344 systemd-networkd[1381]: cali6af82b3a807: Gained carrier Oct 8 19:59:05.413992 kubelet[2466]: I1008 19:59:05.413932 2466 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-6d6d468b64-wq8lg" podStartSLOduration=21.169047921 podStartE2EDuration="22.413913245s" podCreationTimestamp="2024-10-08 19:58:43 +0000 UTC" firstStartedPulling="2024-10-08 19:59:03.716603547 +0000 UTC m=+32.732406457" lastFinishedPulling="2024-10-08 19:59:04.961468831 +0000 UTC m=+33.977271781" observedRunningTime="2024-10-08 19:59:05.237107454 +0000 UTC m=+34.252910404" watchObservedRunningTime="2024-10-08 19:59:05.413913245 +0000 UTC m=+34.429716195" Oct 8 19:59:05.417929 containerd[1439]: 2024-10-08 19:59:05.305 [INFO][4144] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--6f6b679f8f--lccm5-eth0 coredns-6f6b679f8f- kube-system 4a8304d9-5b66-4740-817a-39422665117d 747 0 2024-10-08 19:58:37 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-6f6b679f8f-lccm5 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali6af82b3a807 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="33e47995f9e9df8f7bd0958f4f4e29d19910531953f6522c18ac911723071cb9" Namespace="kube-system" Pod="coredns-6f6b679f8f-lccm5" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--lccm5-" Oct 8 19:59:05.417929 containerd[1439]: 2024-10-08 19:59:05.305 [INFO][4144] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="33e47995f9e9df8f7bd0958f4f4e29d19910531953f6522c18ac911723071cb9" Namespace="kube-system" Pod="coredns-6f6b679f8f-lccm5" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--lccm5-eth0" Oct 8 19:59:05.417929 containerd[1439]: 2024-10-08 19:59:05.342 [INFO][4171] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="33e47995f9e9df8f7bd0958f4f4e29d19910531953f6522c18ac911723071cb9" HandleID="k8s-pod-network.33e47995f9e9df8f7bd0958f4f4e29d19910531953f6522c18ac911723071cb9" Workload="localhost-k8s-coredns--6f6b679f8f--lccm5-eth0" Oct 8 19:59:05.417929 containerd[1439]: 2024-10-08 19:59:05.358 [INFO][4171] ipam_plugin.go 270: Auto assigning IP ContainerID="33e47995f9e9df8f7bd0958f4f4e29d19910531953f6522c18ac911723071cb9" 
HandleID="k8s-pod-network.33e47995f9e9df8f7bd0958f4f4e29d19910531953f6522c18ac911723071cb9" Workload="localhost-k8s-coredns--6f6b679f8f--lccm5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002f4c30), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-6f6b679f8f-lccm5", "timestamp":"2024-10-08 19:59:05.342887382 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 8 19:59:05.417929 containerd[1439]: 2024-10-08 19:59:05.359 [INFO][4171] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 19:59:05.417929 containerd[1439]: 2024-10-08 19:59:05.359 [INFO][4171] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 19:59:05.417929 containerd[1439]: 2024-10-08 19:59:05.359 [INFO][4171] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 8 19:59:05.417929 containerd[1439]: 2024-10-08 19:59:05.363 [INFO][4171] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.33e47995f9e9df8f7bd0958f4f4e29d19910531953f6522c18ac911723071cb9" host="localhost" Oct 8 19:59:05.417929 containerd[1439]: 2024-10-08 19:59:05.370 [INFO][4171] ipam.go 372: Looking up existing affinities for host host="localhost" Oct 8 19:59:05.417929 containerd[1439]: 2024-10-08 19:59:05.377 [INFO][4171] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Oct 8 19:59:05.417929 containerd[1439]: 2024-10-08 19:59:05.380 [INFO][4171] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 8 19:59:05.417929 containerd[1439]: 2024-10-08 19:59:05.383 [INFO][4171] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 8 19:59:05.417929 containerd[1439]: 2024-10-08 19:59:05.383 [INFO][4171] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.33e47995f9e9df8f7bd0958f4f4e29d19910531953f6522c18ac911723071cb9" host="localhost" Oct 8 19:59:05.417929 containerd[1439]: 2024-10-08 19:59:05.385 [INFO][4171] ipam.go 1685: Creating new handle: k8s-pod-network.33e47995f9e9df8f7bd0958f4f4e29d19910531953f6522c18ac911723071cb9 Oct 8 19:59:05.417929 containerd[1439]: 2024-10-08 19:59:05.389 [INFO][4171] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.33e47995f9e9df8f7bd0958f4f4e29d19910531953f6522c18ac911723071cb9" host="localhost" Oct 8 19:59:05.417929 containerd[1439]: 2024-10-08 19:59:05.395 [INFO][4171] ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.33e47995f9e9df8f7bd0958f4f4e29d19910531953f6522c18ac911723071cb9" host="localhost" Oct 8 19:59:05.417929 containerd[1439]: 2024-10-08 19:59:05.396 [INFO][4171] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.33e47995f9e9df8f7bd0958f4f4e29d19910531953f6522c18ac911723071cb9" host="localhost" Oct 8 19:59:05.417929 containerd[1439]: 2024-10-08 19:59:05.396 [INFO][4171] ipam_plugin.go 379: Released host-wide IPAM lock. 
Oct 8 19:59:05.417929 containerd[1439]: 2024-10-08 19:59:05.396 [INFO][4171] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="33e47995f9e9df8f7bd0958f4f4e29d19910531953f6522c18ac911723071cb9" HandleID="k8s-pod-network.33e47995f9e9df8f7bd0958f4f4e29d19910531953f6522c18ac911723071cb9" Workload="localhost-k8s-coredns--6f6b679f8f--lccm5-eth0" Oct 8 19:59:05.418580 containerd[1439]: 2024-10-08 19:59:05.401 [INFO][4144] k8s.go 386: Populated endpoint ContainerID="33e47995f9e9df8f7bd0958f4f4e29d19910531953f6522c18ac911723071cb9" Namespace="kube-system" Pod="coredns-6f6b679f8f-lccm5" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--lccm5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--lccm5-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"4a8304d9-5b66-4740-817a-39422665117d", ResourceVersion:"747", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 58, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-6f6b679f8f-lccm5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6af82b3a807", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:59:05.418580 containerd[1439]: 2024-10-08 19:59:05.401 [INFO][4144] k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="33e47995f9e9df8f7bd0958f4f4e29d19910531953f6522c18ac911723071cb9" Namespace="kube-system" Pod="coredns-6f6b679f8f-lccm5" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--lccm5-eth0" Oct 8 19:59:05.418580 containerd[1439]: 2024-10-08 19:59:05.401 [INFO][4144] dataplane_linux.go 68: Setting the host side veth name to cali6af82b3a807 ContainerID="33e47995f9e9df8f7bd0958f4f4e29d19910531953f6522c18ac911723071cb9" Namespace="kube-system" Pod="coredns-6f6b679f8f-lccm5" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--lccm5-eth0" Oct 8 19:59:05.418580 containerd[1439]: 2024-10-08 19:59:05.404 [INFO][4144] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="33e47995f9e9df8f7bd0958f4f4e29d19910531953f6522c18ac911723071cb9" Namespace="kube-system" Pod="coredns-6f6b679f8f-lccm5" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--lccm5-eth0" Oct 8 19:59:05.418580 containerd[1439]: 2024-10-08 19:59:05.404 [INFO][4144] k8s.go 414: Added Mac, interface name, and 
active container ID to endpoint ContainerID="33e47995f9e9df8f7bd0958f4f4e29d19910531953f6522c18ac911723071cb9" Namespace="kube-system" Pod="coredns-6f6b679f8f-lccm5" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--lccm5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--lccm5-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"4a8304d9-5b66-4740-817a-39422665117d", ResourceVersion:"747", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 58, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"33e47995f9e9df8f7bd0958f4f4e29d19910531953f6522c18ac911723071cb9", Pod:"coredns-6f6b679f8f-lccm5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6af82b3a807", MAC:"12:9a:13:18:d7:5e", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:59:05.418580 containerd[1439]: 2024-10-08 19:59:05.415 [INFO][4144] k8s.go 500: Wrote updated endpoint to datastore ContainerID="33e47995f9e9df8f7bd0958f4f4e29d19910531953f6522c18ac911723071cb9" Namespace="kube-system" Pod="coredns-6f6b679f8f-lccm5" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--lccm5-eth0" Oct 8 19:59:05.444988 containerd[1439]: time="2024-10-08T19:59:05.444442009Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 19:59:05.444988 containerd[1439]: time="2024-10-08T19:59:05.444504813Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 19:59:05.444988 containerd[1439]: time="2024-10-08T19:59:05.444520094Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:59:05.444988 containerd[1439]: time="2024-10-08T19:59:05.444730467Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:59:05.482033 systemd[1]: Started cri-containerd-33e47995f9e9df8f7bd0958f4f4e29d19910531953f6522c18ac911723071cb9.scope - libcontainer container 33e47995f9e9df8f7bd0958f4f4e29d19910531953f6522c18ac911723071cb9. Oct 8 19:59:05.483569 systemd[1]: Started sshd@7-10.0.0.130:22-10.0.0.1:46072.service - OpenSSH per-connection server daemon (10.0.0.1:46072). 
Oct 8 19:59:05.513652 systemd-resolved[1316]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 8 19:59:05.517317 systemd-networkd[1381]: calid4231df1edc: Link UP Oct 8 19:59:05.521171 systemd-networkd[1381]: calid4231df1edc: Gained carrier Oct 8 19:59:05.537265 sshd[4229]: Accepted publickey for core from 10.0.0.1 port 46072 ssh2: RSA SHA256:PeFR0GwG3Km7u6+IJymPx7tkM/vpusnYsvzmiMSzq3A Oct 8 19:59:05.542406 sshd[4229]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 19:59:05.544335 containerd[1439]: time="2024-10-08T19:59:05.544300531Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-lccm5,Uid:4a8304d9-5b66-4740-817a-39422665117d,Namespace:kube-system,Attempt:1,} returns sandbox id \"33e47995f9e9df8f7bd0958f4f4e29d19910531953f6522c18ac911723071cb9\"" Oct 8 19:59:05.545585 containerd[1439]: 2024-10-08 19:59:05.332 [INFO][4155] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--ggmcg-eth0 csi-node-driver- calico-system 199cac1a-3d1f-4713-aec1-c124cb5e48d4 749 0 2024-10-08 19:58:43 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:779867c8f7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s localhost csi-node-driver-ggmcg eth0 default [] [] [kns.calico-system ksa.calico-system.default] calid4231df1edc [] []}} ContainerID="7222a5c86081f956598d8f81bdb5ccba94a19307b07421767d5b2695f4a64721" Namespace="calico-system" Pod="csi-node-driver-ggmcg" WorkloadEndpoint="localhost-k8s-csi--node--driver--ggmcg-" Oct 8 19:59:05.545585 containerd[1439]: 2024-10-08 19:59:05.332 [INFO][4155] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="7222a5c86081f956598d8f81bdb5ccba94a19307b07421767d5b2695f4a64721" Namespace="calico-system" Pod="csi-node-driver-ggmcg" WorkloadEndpoint="localhost-k8s-csi--node--driver--ggmcg-eth0" Oct 8 19:59:05.545585 containerd[1439]: 2024-10-08 19:59:05.375 [INFO][4180] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7222a5c86081f956598d8f81bdb5ccba94a19307b07421767d5b2695f4a64721" HandleID="k8s-pod-network.7222a5c86081f956598d8f81bdb5ccba94a19307b07421767d5b2695f4a64721" Workload="localhost-k8s-csi--node--driver--ggmcg-eth0" Oct 8 19:59:05.545585 containerd[1439]: 2024-10-08 19:59:05.386 [INFO][4180] ipam_plugin.go 270: Auto assigning IP ContainerID="7222a5c86081f956598d8f81bdb5ccba94a19307b07421767d5b2695f4a64721" HandleID="k8s-pod-network.7222a5c86081f956598d8f81bdb5ccba94a19307b07421767d5b2695f4a64721" Workload="localhost-k8s-csi--node--driver--ggmcg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003001e0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-ggmcg", "timestamp":"2024-10-08 19:59:05.375482153 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 8 19:59:05.545585 containerd[1439]: 2024-10-08 19:59:05.386 [INFO][4180] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 19:59:05.545585 containerd[1439]: 2024-10-08 19:59:05.396 [INFO][4180] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Oct 8 19:59:05.545585 containerd[1439]: 2024-10-08 19:59:05.396 [INFO][4180] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 8 19:59:05.545585 containerd[1439]: 2024-10-08 19:59:05.462 [INFO][4180] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.7222a5c86081f956598d8f81bdb5ccba94a19307b07421767d5b2695f4a64721" host="localhost" Oct 8 19:59:05.545585 containerd[1439]: 2024-10-08 19:59:05.472 [INFO][4180] ipam.go 372: Looking up existing affinities for host host="localhost" Oct 8 19:59:05.545585 containerd[1439]: 2024-10-08 19:59:05.481 [INFO][4180] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Oct 8 19:59:05.545585 containerd[1439]: 2024-10-08 19:59:05.488 [INFO][4180] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 8 19:59:05.545585 containerd[1439]: 2024-10-08 19:59:05.491 [INFO][4180] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 8 19:59:05.545585 containerd[1439]: 2024-10-08 19:59:05.491 [INFO][4180] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.7222a5c86081f956598d8f81bdb5ccba94a19307b07421767d5b2695f4a64721" host="localhost" Oct 8 19:59:05.545585 containerd[1439]: 2024-10-08 19:59:05.492 [INFO][4180] ipam.go 1685: Creating new handle: k8s-pod-network.7222a5c86081f956598d8f81bdb5ccba94a19307b07421767d5b2695f4a64721 Oct 8 19:59:05.545585 containerd[1439]: 2024-10-08 19:59:05.498 [INFO][4180] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.7222a5c86081f956598d8f81bdb5ccba94a19307b07421767d5b2695f4a64721" host="localhost" Oct 8 19:59:05.545585 containerd[1439]: 2024-10-08 19:59:05.505 [INFO][4180] ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.7222a5c86081f956598d8f81bdb5ccba94a19307b07421767d5b2695f4a64721" host="localhost" Oct 8 19:59:05.545585 containerd[1439]: 2024-10-08 19:59:05.505 [INFO][4180] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.7222a5c86081f956598d8f81bdb5ccba94a19307b07421767d5b2695f4a64721" host="localhost" Oct 8 19:59:05.545585 containerd[1439]: 2024-10-08 19:59:05.505 [INFO][4180] ipam_plugin.go 379: Released host-wide IPAM lock. 
Oct 8 19:59:05.545585 containerd[1439]: 2024-10-08 19:59:05.505 [INFO][4180] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="7222a5c86081f956598d8f81bdb5ccba94a19307b07421767d5b2695f4a64721" HandleID="k8s-pod-network.7222a5c86081f956598d8f81bdb5ccba94a19307b07421767d5b2695f4a64721" Workload="localhost-k8s-csi--node--driver--ggmcg-eth0" Oct 8 19:59:05.546073 containerd[1439]: 2024-10-08 19:59:05.514 [INFO][4155] k8s.go 386: Populated endpoint ContainerID="7222a5c86081f956598d8f81bdb5ccba94a19307b07421767d5b2695f4a64721" Namespace="calico-system" Pod="csi-node-driver-ggmcg" WorkloadEndpoint="localhost-k8s-csi--node--driver--ggmcg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--ggmcg-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"199cac1a-3d1f-4713-aec1-c124cb5e48d4", ResourceVersion:"749", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 58, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"779867c8f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-ggmcg", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"calid4231df1edc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:59:05.546073 containerd[1439]: 2024-10-08 19:59:05.514 [INFO][4155] k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="7222a5c86081f956598d8f81bdb5ccba94a19307b07421767d5b2695f4a64721" Namespace="calico-system" Pod="csi-node-driver-ggmcg" WorkloadEndpoint="localhost-k8s-csi--node--driver--ggmcg-eth0" Oct 8 19:59:05.546073 containerd[1439]: 2024-10-08 19:59:05.514 [INFO][4155] dataplane_linux.go 68: Setting the host side veth name to calid4231df1edc ContainerID="7222a5c86081f956598d8f81bdb5ccba94a19307b07421767d5b2695f4a64721" Namespace="calico-system" Pod="csi-node-driver-ggmcg" WorkloadEndpoint="localhost-k8s-csi--node--driver--ggmcg-eth0" Oct 8 19:59:05.546073 containerd[1439]: 2024-10-08 19:59:05.520 [INFO][4155] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="7222a5c86081f956598d8f81bdb5ccba94a19307b07421767d5b2695f4a64721" Namespace="calico-system" Pod="csi-node-driver-ggmcg" WorkloadEndpoint="localhost-k8s-csi--node--driver--ggmcg-eth0" Oct 8 19:59:05.546073 containerd[1439]: 2024-10-08 19:59:05.521 [INFO][4155] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="7222a5c86081f956598d8f81bdb5ccba94a19307b07421767d5b2695f4a64721" Namespace="calico-system" Pod="csi-node-driver-ggmcg" WorkloadEndpoint="localhost-k8s-csi--node--driver--ggmcg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--ggmcg-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"199cac1a-3d1f-4713-aec1-c124cb5e48d4", ResourceVersion:"749", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 58, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"779867c8f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7222a5c86081f956598d8f81bdb5ccba94a19307b07421767d5b2695f4a64721", Pod:"csi-node-driver-ggmcg", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"calid4231df1edc", MAC:"9a:ef:c9:1d:a7:8f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:59:05.546073 containerd[1439]: 2024-10-08 19:59:05.538 [INFO][4155] k8s.go 500: Wrote updated endpoint to datastore ContainerID="7222a5c86081f956598d8f81bdb5ccba94a19307b07421767d5b2695f4a64721" Namespace="calico-system" Pod="csi-node-driver-ggmcg" WorkloadEndpoint="localhost-k8s-csi--node--driver--ggmcg-eth0" Oct 8 19:59:05.547855 kubelet[2466]: E1008 19:59:05.547815 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:59:05.552809 systemd-logind[1426]: New session 8 of user core. Oct 8 19:59:05.554898 containerd[1439]: time="2024-10-08T19:59:05.554412315Z" level=info msg="CreateContainer within sandbox \"33e47995f9e9df8f7bd0958f4f4e29d19910531953f6522c18ac911723071cb9\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 8 19:59:05.560141 systemd[1]: Started session-8.scope - Session 8 of User core. Oct 8 19:59:05.576300 containerd[1439]: time="2024-10-08T19:59:05.576128136Z" level=info msg="CreateContainer within sandbox \"33e47995f9e9df8f7bd0958f4f4e29d19910531953f6522c18ac911723071cb9\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2066fa88588b2914045ecf2587dcaffb80eec449b80bfe99a03dc75b9f436233\"" Oct 8 19:59:05.578340 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1366125863.mount: Deactivated successfully. Oct 8 19:59:05.579307 containerd[1439]: time="2024-10-08T19:59:05.579153442Z" level=info msg="StartContainer for \"2066fa88588b2914045ecf2587dcaffb80eec449b80bfe99a03dc75b9f436233\"" Oct 8 19:59:05.591969 containerd[1439]: time="2024-10-08T19:59:05.591695896Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 19:59:05.591969 containerd[1439]: time="2024-10-08T19:59:05.591788022Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 19:59:05.591969 containerd[1439]: time="2024-10-08T19:59:05.591803583Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:59:05.591969 containerd[1439]: time="2024-10-08T19:59:05.591918870Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:59:05.615766 systemd[1]: Started cri-containerd-2066fa88588b2914045ecf2587dcaffb80eec449b80bfe99a03dc75b9f436233.scope - libcontainer container 2066fa88588b2914045ecf2587dcaffb80eec449b80bfe99a03dc75b9f436233. Oct 8 19:59:05.626027 systemd[1]: Started cri-containerd-7222a5c86081f956598d8f81bdb5ccba94a19307b07421767d5b2695f4a64721.scope - libcontainer container 7222a5c86081f956598d8f81bdb5ccba94a19307b07421767d5b2695f4a64721. Oct 8 19:59:05.649767 systemd-resolved[1316]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 8 19:59:05.655016 containerd[1439]: time="2024-10-08T19:59:05.654977921Z" level=info msg="StartContainer for \"2066fa88588b2914045ecf2587dcaffb80eec449b80bfe99a03dc75b9f436233\" returns successfully" Oct 8 19:59:05.690455 containerd[1439]: time="2024-10-08T19:59:05.690372506Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ggmcg,Uid:199cac1a-3d1f-4713-aec1-c124cb5e48d4,Namespace:calico-system,Attempt:1,} returns sandbox id \"7222a5c86081f956598d8f81bdb5ccba94a19307b07421767d5b2695f4a64721\"" Oct 8 19:59:05.692621 containerd[1439]: time="2024-10-08T19:59:05.692596243Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.1\"" Oct 8 19:59:05.828038 sshd[4229]: pam_unix(sshd:session): session closed for user core Oct 8 19:59:05.832784 systemd[1]: sshd@7-10.0.0.130:22-10.0.0.1:46072.service: Deactivated successfully. Oct 8 19:59:05.834519 systemd[1]: session-8.scope: Deactivated successfully. Oct 8 19:59:05.835181 systemd-logind[1426]: Session 8 logged out. Waiting for processes to exit. Oct 8 19:59:05.836095 systemd-logind[1426]: Removed session 8. 
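The PullImage / "stop pulling image" / ImageCreate triples above trace containerd resolving a tag, fetching its blobs, and registering the image under both its tag and digest names. The sketch below replays the csi image pull through containerd's public Go client; the socket path and the "k8s.io" namespace are the conventional kubelet-facing defaults, so treat them as assumptions rather than something the log confirms.

    package main

    import (
        "context"
        "fmt"
        "log"

        "github.com/containerd/containerd"
        "github.com/containerd/containerd/namespaces"
    )

    func main() {
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

        // Pull and unpack the same image the kubelet requested above.
        img, err := client.Pull(ctx, "ghcr.io/flatcar/calico/csi:v3.28.1",
            containerd.WithPullUnpack)
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("pulled", img.Name(), "->", img.Target().Digest)
    }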
Oct 8 19:59:06.218587 kubelet[2466]: E1008 19:59:06.218294 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:59:06.221544 kubelet[2466]: I1008 19:59:06.221523 2466 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 8 19:59:06.222149 kubelet[2466]: E1008 19:59:06.222107 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:59:06.247665 kubelet[2466]: I1008 19:59:06.246827 2466 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-lccm5" podStartSLOduration=29.246812103 podStartE2EDuration="29.246812103s" podCreationTimestamp="2024-10-08 19:58:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-08 19:59:06.229143803 +0000 UTC m=+35.244946753" watchObservedRunningTime="2024-10-08 19:59:06.246812103 +0000 UTC m=+35.262615053" Oct 8 19:59:06.562662 containerd[1439]: time="2024-10-08T19:59:06.562366196Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:59:06.563178 containerd[1439]: time="2024-10-08T19:59:06.563047197Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.28.1: active requests=0, bytes read=7211060" Oct 8 19:59:06.564648 containerd[1439]: time="2024-10-08T19:59:06.564611731Z" level=info msg="ImageCreate event name:\"sha256:dd6cf4bf9b3656f9dd9713f21ac1be96858f750a9a3bf340983fb7072f4eda2a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:59:06.579071 containerd[1439]: time="2024-10-08T19:59:06.579030116Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:01e16d03dd0c29a8e1e302455eb15c2d0326c49cbaca4bbe8dc0e2d5308c5add\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:59:06.579665 containerd[1439]: time="2024-10-08T19:59:06.579619831Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.28.1\" with image id \"sha256:dd6cf4bf9b3656f9dd9713f21ac1be96858f750a9a3bf340983fb7072f4eda2a\", repo tag \"ghcr.io/flatcar/calico/csi:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:01e16d03dd0c29a8e1e302455eb15c2d0326c49cbaca4bbe8dc0e2d5308c5add\", size \"8578579\" in 886.989386ms" Oct 8 19:59:06.579665 containerd[1439]: time="2024-10-08T19:59:06.579652513Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.1\" returns image reference \"sha256:dd6cf4bf9b3656f9dd9713f21ac1be96858f750a9a3bf340983fb7072f4eda2a\"" Oct 8 19:59:06.584099 containerd[1439]: time="2024-10-08T19:59:06.584067058Z" level=info msg="CreateContainer within sandbox \"7222a5c86081f956598d8f81bdb5ccba94a19307b07421767d5b2695f4a64721\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Oct 8 19:59:06.639785 containerd[1439]: time="2024-10-08T19:59:06.639714077Z" level=info msg="CreateContainer within sandbox \"7222a5c86081f956598d8f81bdb5ccba94a19307b07421767d5b2695f4a64721\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"1c90bdb181e93d46560c582d866dd8f666ab7a2fbad5699ad0d92a2e00baeb9d\"" Oct 8 19:59:06.642323 containerd[1439]: time="2024-10-08T19:59:06.640917469Z" level=info msg="StartContainer for 
\"1c90bdb181e93d46560c582d866dd8f666ab7a2fbad5699ad0d92a2e00baeb9d\"" Oct 8 19:59:06.684060 systemd[1]: Started cri-containerd-1c90bdb181e93d46560c582d866dd8f666ab7a2fbad5699ad0d92a2e00baeb9d.scope - libcontainer container 1c90bdb181e93d46560c582d866dd8f666ab7a2fbad5699ad0d92a2e00baeb9d. Oct 8 19:59:06.721407 containerd[1439]: time="2024-10-08T19:59:06.721363535Z" level=info msg="StartContainer for \"1c90bdb181e93d46560c582d866dd8f666ab7a2fbad5699ad0d92a2e00baeb9d\" returns successfully" Oct 8 19:59:06.723506 containerd[1439]: time="2024-10-08T19:59:06.723472422Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\"" Oct 8 19:59:07.224683 kubelet[2466]: E1008 19:59:07.224629 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:59:07.367092 systemd-networkd[1381]: calid4231df1edc: Gained IPv6LL Oct 8 19:59:07.431051 systemd-networkd[1381]: cali6af82b3a807: Gained IPv6LL Oct 8 19:59:07.545320 containerd[1439]: time="2024-10-08T19:59:07.544893323Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:59:07.546430 containerd[1439]: time="2024-10-08T19:59:07.546368969Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1: active requests=0, bytes read=12116870" Oct 8 19:59:07.547436 containerd[1439]: time="2024-10-08T19:59:07.547402910Z" level=info msg="ImageCreate event name:\"sha256:4df800f2dc90e056e3dc95be5afe5cd399ce8785c6817ddeaf07b498cb85207a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:59:07.549575 containerd[1439]: time="2024-10-08T19:59:07.549545154Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:682cc97e4580d25b7314032c008a552bb05182fac34eba82cc389113c7767076\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:59:07.550296 containerd[1439]: time="2024-10-08T19:59:07.550173031Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" with image id \"sha256:4df800f2dc90e056e3dc95be5afe5cd399ce8785c6817ddeaf07b498cb85207a\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:682cc97e4580d25b7314032c008a552bb05182fac34eba82cc389113c7767076\", size \"13484341\" in 826.658807ms" Oct 8 19:59:07.550296 containerd[1439]: time="2024-10-08T19:59:07.550201073Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" returns image reference \"sha256:4df800f2dc90e056e3dc95be5afe5cd399ce8785c6817ddeaf07b498cb85207a\"" Oct 8 19:59:07.554064 containerd[1439]: time="2024-10-08T19:59:07.554036016Z" level=info msg="CreateContainer within sandbox \"7222a5c86081f956598d8f81bdb5ccba94a19307b07421767d5b2695f4a64721\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Oct 8 19:59:07.581595 containerd[1439]: time="2024-10-08T19:59:07.581556822Z" level=info msg="CreateContainer within sandbox \"7222a5c86081f956598d8f81bdb5ccba94a19307b07421767d5b2695f4a64721\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"fa873f66155f5f39dfb11ca724725453763e6002374b24837807c4a92aa34ee5\"" Oct 8 19:59:07.583098 containerd[1439]: time="2024-10-08T19:59:07.581983967Z" level=info msg="StartContainer for 
\"fa873f66155f5f39dfb11ca724725453763e6002374b24837807c4a92aa34ee5\"" Oct 8 19:59:07.614037 systemd[1]: Started cri-containerd-fa873f66155f5f39dfb11ca724725453763e6002374b24837807c4a92aa34ee5.scope - libcontainer container fa873f66155f5f39dfb11ca724725453763e6002374b24837807c4a92aa34ee5. Oct 8 19:59:07.637561 containerd[1439]: time="2024-10-08T19:59:07.637496525Z" level=info msg="StartContainer for \"fa873f66155f5f39dfb11ca724725453763e6002374b24837807c4a92aa34ee5\" returns successfully" Oct 8 19:59:08.155276 kubelet[2466]: I1008 19:59:08.155226 2466 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Oct 8 19:59:08.157110 kubelet[2466]: I1008 19:59:08.157077 2466 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Oct 8 19:59:08.230474 kubelet[2466]: E1008 19:59:08.230441 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:59:08.241331 kubelet[2466]: I1008 19:59:08.241256 2466 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-ggmcg" podStartSLOduration=23.381267069 podStartE2EDuration="25.241240321s" podCreationTimestamp="2024-10-08 19:58:43 +0000 UTC" firstStartedPulling="2024-10-08 19:59:05.691534337 +0000 UTC m=+34.707337287" lastFinishedPulling="2024-10-08 19:59:07.551507589 +0000 UTC m=+36.567310539" observedRunningTime="2024-10-08 19:59:08.240481798 +0000 UTC m=+37.256284748" watchObservedRunningTime="2024-10-08 19:59:08.241240321 +0000 UTC m=+37.257043271" Oct 8 19:59:10.842241 systemd[1]: Started sshd@8-10.0.0.130:22-10.0.0.1:46086.service - OpenSSH per-connection server daemon (10.0.0.1:46086). Oct 8 19:59:10.897956 sshd[4453]: Accepted publickey for core from 10.0.0.1 port 46086 ssh2: RSA SHA256:PeFR0GwG3Km7u6+IJymPx7tkM/vpusnYsvzmiMSzq3A Oct 8 19:59:10.899519 sshd[4453]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 19:59:10.903437 systemd-logind[1426]: New session 9 of user core. Oct 8 19:59:10.910011 systemd[1]: Started session-9.scope - Session 9 of User core. Oct 8 19:59:11.135066 sshd[4453]: pam_unix(sshd:session): session closed for user core Oct 8 19:59:11.140082 systemd-logind[1426]: Session 9 logged out. Waiting for processes to exit. Oct 8 19:59:11.140375 systemd[1]: sshd@8-10.0.0.130:22-10.0.0.1:46086.service: Deactivated successfully. Oct 8 19:59:11.144063 systemd[1]: session-9.scope: Deactivated successfully. Oct 8 19:59:11.144850 systemd-logind[1426]: Removed session 9. Oct 8 19:59:11.842505 kubelet[2466]: I1008 19:59:11.842114 2466 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 8 19:59:11.860433 systemd[1]: run-containerd-runc-k8s.io-1c33d73ed16c16f492a199100b0d17735f6a1bf47ef5406d28ff9a6a2346d935-runc.8VwbSE.mount: Deactivated successfully. Oct 8 19:59:16.150673 systemd[1]: Started sshd@9-10.0.0.130:22-10.0.0.1:46148.service - OpenSSH per-connection server daemon (10.0.0.1:46148). 
Oct 8 19:59:16.190904 sshd[4515]: Accepted publickey for core from 10.0.0.1 port 46148 ssh2: RSA SHA256:PeFR0GwG3Km7u6+IJymPx7tkM/vpusnYsvzmiMSzq3A Oct 8 19:59:16.192694 sshd[4515]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 19:59:16.201108 systemd-logind[1426]: New session 10 of user core. Oct 8 19:59:16.217086 systemd[1]: Started session-10.scope - Session 10 of User core. Oct 8 19:59:16.370822 sshd[4515]: pam_unix(sshd:session): session closed for user core Oct 8 19:59:16.393747 systemd[1]: sshd@9-10.0.0.130:22-10.0.0.1:46148.service: Deactivated successfully. Oct 8 19:59:16.397418 systemd[1]: session-10.scope: Deactivated successfully. Oct 8 19:59:16.400062 systemd-logind[1426]: Session 10 logged out. Waiting for processes to exit. Oct 8 19:59:16.404608 systemd[1]: Started sshd@10-10.0.0.130:22-10.0.0.1:46164.service - OpenSSH per-connection server daemon (10.0.0.1:46164). Oct 8 19:59:16.406124 systemd-logind[1426]: Removed session 10. Oct 8 19:59:16.449045 sshd[4531]: Accepted publickey for core from 10.0.0.1 port 46164 ssh2: RSA SHA256:PeFR0GwG3Km7u6+IJymPx7tkM/vpusnYsvzmiMSzq3A Oct 8 19:59:16.451040 sshd[4531]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 19:59:16.457072 systemd-logind[1426]: New session 11 of user core. Oct 8 19:59:16.470067 systemd[1]: Started session-11.scope - Session 11 of User core. Oct 8 19:59:16.709051 sshd[4531]: pam_unix(sshd:session): session closed for user core Oct 8 19:59:16.719694 systemd[1]: sshd@10-10.0.0.130:22-10.0.0.1:46164.service: Deactivated successfully. Oct 8 19:59:16.723349 systemd[1]: session-11.scope: Deactivated successfully. Oct 8 19:59:16.724880 systemd-logind[1426]: Session 11 logged out. Waiting for processes to exit. Oct 8 19:59:16.736214 systemd[1]: Started sshd@11-10.0.0.130:22-10.0.0.1:46168.service - OpenSSH per-connection server daemon (10.0.0.1:46168). Oct 8 19:59:16.738538 systemd-logind[1426]: Removed session 11. Oct 8 19:59:16.774707 sshd[4550]: Accepted publickey for core from 10.0.0.1 port 46168 ssh2: RSA SHA256:PeFR0GwG3Km7u6+IJymPx7tkM/vpusnYsvzmiMSzq3A Oct 8 19:59:16.776009 sshd[4550]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 19:59:16.779942 systemd-logind[1426]: New session 12 of user core. Oct 8 19:59:16.791031 systemd[1]: Started session-12.scope - Session 12 of User core. Oct 8 19:59:16.947491 sshd[4550]: pam_unix(sshd:session): session closed for user core Oct 8 19:59:16.951247 systemd[1]: sshd@11-10.0.0.130:22-10.0.0.1:46168.service: Deactivated successfully. Oct 8 19:59:16.953078 systemd[1]: session-12.scope: Deactivated successfully. Oct 8 19:59:16.954711 systemd-logind[1426]: Session 12 logged out. Waiting for processes to exit. Oct 8 19:59:16.955649 systemd-logind[1426]: Removed session 12. Oct 8 19:59:21.962546 systemd[1]: Started sshd@12-10.0.0.130:22-10.0.0.1:46174.service - OpenSSH per-connection server daemon (10.0.0.1:46174). Oct 8 19:59:21.998877 sshd[4573]: Accepted publickey for core from 10.0.0.1 port 46174 ssh2: RSA SHA256:PeFR0GwG3Km7u6+IJymPx7tkM/vpusnYsvzmiMSzq3A Oct 8 19:59:22.000477 sshd[4573]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 19:59:22.004263 systemd-logind[1426]: New session 13 of user core. Oct 8 19:59:22.012042 systemd[1]: Started session-13.scope - Session 13 of User core. 
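[Editor's note] Each repeating cycle above — sshd "Accepted publickey", pam_unix "session opened", systemd-logind "New session N of user core", then the mirror-image teardown — corresponds to one SSH connection. A client like the following sketch, using golang.org/x/crypto/ssh, would produce exactly one such cycle against sshd on 10.0.0.130:22; the key path and command are illustrative, and the host-key callback is deliberately lax for the sketch only.

package main

import (
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/home/core/.ssh/id_rsa") // illustrative path
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatal(err)
	}

	cfg := &ssh.ClientConfig{
		User:            "core",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // sketch only; verify host keys in practice
	}

	// sshd logs "Accepted publickey for core from ..." here.
	client, err := ssh.Dial("tcp", "10.0.0.130:22", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close() // pam_unix logs "session closed for user core"

	sess, err := client.NewSession() // pam_unix opens the session; logind allocates session-N.scope
	if err != nil {
		log.Fatal(err)
	}
	defer sess.Close()

	out, err := sess.CombinedOutput("hostname")
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("%s", out)
}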
Oct 8 19:59:22.192887 sshd[4573]: pam_unix(sshd:session): session closed for user core Oct 8 19:59:22.203183 systemd[1]: sshd@12-10.0.0.130:22-10.0.0.1:46174.service: Deactivated successfully. Oct 8 19:59:22.204651 systemd[1]: session-13.scope: Deactivated successfully. Oct 8 19:59:22.206370 systemd-logind[1426]: Session 13 logged out. Waiting for processes to exit. Oct 8 19:59:22.213104 systemd[1]: Started sshd@13-10.0.0.130:22-10.0.0.1:46184.service - OpenSSH per-connection server daemon (10.0.0.1:46184). Oct 8 19:59:22.214357 systemd-logind[1426]: Removed session 13. Oct 8 19:59:22.250004 sshd[4588]: Accepted publickey for core from 10.0.0.1 port 46184 ssh2: RSA SHA256:PeFR0GwG3Km7u6+IJymPx7tkM/vpusnYsvzmiMSzq3A Oct 8 19:59:22.251611 sshd[4588]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 19:59:22.256237 systemd-logind[1426]: New session 14 of user core. Oct 8 19:59:22.267037 systemd[1]: Started session-14.scope - Session 14 of User core. Oct 8 19:59:22.497909 sshd[4588]: pam_unix(sshd:session): session closed for user core Oct 8 19:59:22.510359 systemd[1]: sshd@13-10.0.0.130:22-10.0.0.1:46184.service: Deactivated successfully. Oct 8 19:59:22.511915 systemd[1]: session-14.scope: Deactivated successfully. Oct 8 19:59:22.513093 systemd-logind[1426]: Session 14 logged out. Waiting for processes to exit. Oct 8 19:59:22.514171 systemd[1]: Started sshd@14-10.0.0.130:22-10.0.0.1:52928.service - OpenSSH per-connection server daemon (10.0.0.1:52928). Oct 8 19:59:22.517678 systemd-logind[1426]: Removed session 14. Oct 8 19:59:22.553603 sshd[4600]: Accepted publickey for core from 10.0.0.1 port 52928 ssh2: RSA SHA256:PeFR0GwG3Km7u6+IJymPx7tkM/vpusnYsvzmiMSzq3A Oct 8 19:59:22.554847 sshd[4600]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 19:59:22.558526 systemd-logind[1426]: New session 15 of user core. Oct 8 19:59:22.570039 systemd[1]: Started session-15.scope - Session 15 of User core. Oct 8 19:59:23.935118 sshd[4600]: pam_unix(sshd:session): session closed for user core Oct 8 19:59:23.946058 systemd[1]: sshd@14-10.0.0.130:22-10.0.0.1:52928.service: Deactivated successfully. Oct 8 19:59:23.950015 systemd[1]: session-15.scope: Deactivated successfully. Oct 8 19:59:23.951057 systemd-logind[1426]: Session 15 logged out. Waiting for processes to exit. Oct 8 19:59:23.962036 systemd[1]: Started sshd@15-10.0.0.130:22-10.0.0.1:52930.service - OpenSSH per-connection server daemon (10.0.0.1:52930). Oct 8 19:59:23.963463 systemd-logind[1426]: Removed session 15. Oct 8 19:59:23.985134 kubelet[2466]: E1008 19:59:23.985100 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:59:24.001881 sshd[4643]: Accepted publickey for core from 10.0.0.1 port 52930 ssh2: RSA SHA256:PeFR0GwG3Km7u6+IJymPx7tkM/vpusnYsvzmiMSzq3A Oct 8 19:59:24.004092 sshd[4643]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 19:59:24.010731 systemd-logind[1426]: New session 16 of user core. Oct 8 19:59:24.021066 systemd[1]: Started session-16.scope - Session 16 of User core. Oct 8 19:59:24.319344 sshd[4643]: pam_unix(sshd:session): session closed for user core Oct 8 19:59:24.328852 systemd[1]: sshd@15-10.0.0.130:22-10.0.0.1:52930.service: Deactivated successfully. Oct 8 19:59:24.330732 systemd[1]: session-16.scope: Deactivated successfully. 
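[Editor's note] On the recurring kubelet dns.go:153 "Nameserver limits exceeded" line above: the glibc resolver only honors the first few nameservers, so the kubelet caps a pod's resolv.conf at a fixed limit (3 on Linux) and logs the trimmed line. The standalone sketch below applies the same cap to show where the logged string comes from; it is an illustration, not kubelet code, and the fourth nameserver in the sample input is invented (the log only shows the three that were kept).

package main

import (
	"fmt"
	"strings"
)

const maxNameservers = 3 // the kubelet's per-pod nameserver cap on Linux

// capNameservers extracts nameserver entries and trims them to the cap,
// logging a line shaped like the kubelet's when some are dropped.
func capNameservers(resolvConf string) []string {
	var servers []string
	for _, line := range strings.Split(resolvConf, "\n") {
		fields := strings.Fields(line)
		if len(fields) == 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if len(servers) > maxNameservers {
		fmt.Printf("Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: %s\n",
			strings.Join(servers[:maxNameservers], " "))
		servers = servers[:maxNameservers]
	}
	return servers
}

func main() {
	// First three nameservers match the log; the fourth is hypothetical.
	conf := "nameserver 1.1.1.1\nnameserver 1.0.0.1\nnameserver 8.8.8.8\nnameserver 8.8.4.4"
	fmt.Println(capNameservers(conf))
}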
Oct 8 19:59:24.332652 systemd-logind[1426]: Session 16 logged out. Waiting for processes to exit. Oct 8 19:59:24.339428 systemd[1]: Started sshd@16-10.0.0.130:22-10.0.0.1:52932.service - OpenSSH per-connection server daemon (10.0.0.1:52932). Oct 8 19:59:24.342554 systemd-logind[1426]: Removed session 16. Oct 8 19:59:24.374940 sshd[4658]: Accepted publickey for core from 10.0.0.1 port 52932 ssh2: RSA SHA256:PeFR0GwG3Km7u6+IJymPx7tkM/vpusnYsvzmiMSzq3A Oct 8 19:59:24.376338 sshd[4658]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 19:59:24.381947 systemd-logind[1426]: New session 17 of user core. Oct 8 19:59:24.387048 systemd[1]: Started session-17.scope - Session 17 of User core. Oct 8 19:59:24.532918 sshd[4658]: pam_unix(sshd:session): session closed for user core Oct 8 19:59:24.536359 systemd[1]: sshd@16-10.0.0.130:22-10.0.0.1:52932.service: Deactivated successfully. Oct 8 19:59:24.539500 systemd[1]: session-17.scope: Deactivated successfully. Oct 8 19:59:24.540113 systemd-logind[1426]: Session 17 logged out. Waiting for processes to exit. Oct 8 19:59:24.540897 systemd-logind[1426]: Removed session 17. Oct 8 19:59:29.543506 systemd[1]: Started sshd@17-10.0.0.130:22-10.0.0.1:52946.service - OpenSSH per-connection server daemon (10.0.0.1:52946). Oct 8 19:59:29.579834 sshd[4679]: Accepted publickey for core from 10.0.0.1 port 52946 ssh2: RSA SHA256:PeFR0GwG3Km7u6+IJymPx7tkM/vpusnYsvzmiMSzq3A Oct 8 19:59:29.581321 sshd[4679]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 19:59:29.585663 systemd-logind[1426]: New session 18 of user core. Oct 8 19:59:29.592039 systemd[1]: Started session-18.scope - Session 18 of User core. Oct 8 19:59:29.718778 sshd[4679]: pam_unix(sshd:session): session closed for user core Oct 8 19:59:29.722375 systemd-logind[1426]: Session 18 logged out. Waiting for processes to exit. Oct 8 19:59:29.722680 systemd[1]: sshd@17-10.0.0.130:22-10.0.0.1:52946.service: Deactivated successfully. Oct 8 19:59:29.725560 systemd[1]: session-18.scope: Deactivated successfully. Oct 8 19:59:29.726685 systemd-logind[1426]: Removed session 18. Oct 8 19:59:31.065455 containerd[1439]: time="2024-10-08T19:59:31.065417913Z" level=info msg="StopPodSandbox for \"13b6ddaadef47d29b0c2e8aee812e0d282261de0e439780c6a582e7bdceb27d2\"" Oct 8 19:59:31.137589 containerd[1439]: 2024-10-08 19:59:31.099 [WARNING][4711] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="13b6ddaadef47d29b0c2e8aee812e0d282261de0e439780c6a582e7bdceb27d2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--ggmcg-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"199cac1a-3d1f-4713-aec1-c124cb5e48d4", ResourceVersion:"812", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 58, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"779867c8f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7222a5c86081f956598d8f81bdb5ccba94a19307b07421767d5b2695f4a64721", Pod:"csi-node-driver-ggmcg", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"calid4231df1edc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:59:31.137589 containerd[1439]: 2024-10-08 19:59:31.100 [INFO][4711] k8s.go 608: Cleaning up netns ContainerID="13b6ddaadef47d29b0c2e8aee812e0d282261de0e439780c6a582e7bdceb27d2" Oct 8 19:59:31.137589 containerd[1439]: 2024-10-08 19:59:31.100 [INFO][4711] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="13b6ddaadef47d29b0c2e8aee812e0d282261de0e439780c6a582e7bdceb27d2" iface="eth0" netns="" Oct 8 19:59:31.137589 containerd[1439]: 2024-10-08 19:59:31.100 [INFO][4711] k8s.go 615: Releasing IP address(es) ContainerID="13b6ddaadef47d29b0c2e8aee812e0d282261de0e439780c6a582e7bdceb27d2" Oct 8 19:59:31.137589 containerd[1439]: 2024-10-08 19:59:31.100 [INFO][4711] utils.go 188: Calico CNI releasing IP address ContainerID="13b6ddaadef47d29b0c2e8aee812e0d282261de0e439780c6a582e7bdceb27d2" Oct 8 19:59:31.137589 containerd[1439]: 2024-10-08 19:59:31.124 [INFO][4721] ipam_plugin.go 417: Releasing address using handleID ContainerID="13b6ddaadef47d29b0c2e8aee812e0d282261de0e439780c6a582e7bdceb27d2" HandleID="k8s-pod-network.13b6ddaadef47d29b0c2e8aee812e0d282261de0e439780c6a582e7bdceb27d2" Workload="localhost-k8s-csi--node--driver--ggmcg-eth0" Oct 8 19:59:31.137589 containerd[1439]: 2024-10-08 19:59:31.124 [INFO][4721] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 19:59:31.137589 containerd[1439]: 2024-10-08 19:59:31.124 [INFO][4721] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 19:59:31.137589 containerd[1439]: 2024-10-08 19:59:31.133 [WARNING][4721] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="13b6ddaadef47d29b0c2e8aee812e0d282261de0e439780c6a582e7bdceb27d2" HandleID="k8s-pod-network.13b6ddaadef47d29b0c2e8aee812e0d282261de0e439780c6a582e7bdceb27d2" Workload="localhost-k8s-csi--node--driver--ggmcg-eth0" Oct 8 19:59:31.137589 containerd[1439]: 2024-10-08 19:59:31.133 [INFO][4721] ipam_plugin.go 445: Releasing address using workloadID ContainerID="13b6ddaadef47d29b0c2e8aee812e0d282261de0e439780c6a582e7bdceb27d2" HandleID="k8s-pod-network.13b6ddaadef47d29b0c2e8aee812e0d282261de0e439780c6a582e7bdceb27d2" Workload="localhost-k8s-csi--node--driver--ggmcg-eth0" Oct 8 19:59:31.137589 containerd[1439]: 2024-10-08 19:59:31.134 [INFO][4721] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 19:59:31.137589 containerd[1439]: 2024-10-08 19:59:31.135 [INFO][4711] k8s.go 621: Teardown processing complete. ContainerID="13b6ddaadef47d29b0c2e8aee812e0d282261de0e439780c6a582e7bdceb27d2" Oct 8 19:59:31.137589 containerd[1439]: time="2024-10-08T19:59:31.137440297Z" level=info msg="TearDown network for sandbox \"13b6ddaadef47d29b0c2e8aee812e0d282261de0e439780c6a582e7bdceb27d2\" successfully" Oct 8 19:59:31.137589 containerd[1439]: time="2024-10-08T19:59:31.137463218Z" level=info msg="StopPodSandbox for \"13b6ddaadef47d29b0c2e8aee812e0d282261de0e439780c6a582e7bdceb27d2\" returns successfully" Oct 8 19:59:31.138625 containerd[1439]: time="2024-10-08T19:59:31.138044916Z" level=info msg="RemovePodSandbox for \"13b6ddaadef47d29b0c2e8aee812e0d282261de0e439780c6a582e7bdceb27d2\"" Oct 8 19:59:31.142292 containerd[1439]: time="2024-10-08T19:59:31.142097761Z" level=info msg="Forcibly stopping sandbox \"13b6ddaadef47d29b0c2e8aee812e0d282261de0e439780c6a582e7bdceb27d2\"" Oct 8 19:59:31.207178 containerd[1439]: 2024-10-08 19:59:31.176 [WARNING][4744] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="13b6ddaadef47d29b0c2e8aee812e0d282261de0e439780c6a582e7bdceb27d2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--ggmcg-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"199cac1a-3d1f-4713-aec1-c124cb5e48d4", ResourceVersion:"812", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 58, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"779867c8f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7222a5c86081f956598d8f81bdb5ccba94a19307b07421767d5b2695f4a64721", Pod:"csi-node-driver-ggmcg", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"calid4231df1edc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:59:31.207178 containerd[1439]: 2024-10-08 19:59:31.176 [INFO][4744] k8s.go 608: Cleaning up netns ContainerID="13b6ddaadef47d29b0c2e8aee812e0d282261de0e439780c6a582e7bdceb27d2" Oct 8 19:59:31.207178 containerd[1439]: 2024-10-08 19:59:31.176 [INFO][4744] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="13b6ddaadef47d29b0c2e8aee812e0d282261de0e439780c6a582e7bdceb27d2" iface="eth0" netns="" Oct 8 19:59:31.207178 containerd[1439]: 2024-10-08 19:59:31.176 [INFO][4744] k8s.go 615: Releasing IP address(es) ContainerID="13b6ddaadef47d29b0c2e8aee812e0d282261de0e439780c6a582e7bdceb27d2" Oct 8 19:59:31.207178 containerd[1439]: 2024-10-08 19:59:31.176 [INFO][4744] utils.go 188: Calico CNI releasing IP address ContainerID="13b6ddaadef47d29b0c2e8aee812e0d282261de0e439780c6a582e7bdceb27d2" Oct 8 19:59:31.207178 containerd[1439]: 2024-10-08 19:59:31.194 [INFO][4752] ipam_plugin.go 417: Releasing address using handleID ContainerID="13b6ddaadef47d29b0c2e8aee812e0d282261de0e439780c6a582e7bdceb27d2" HandleID="k8s-pod-network.13b6ddaadef47d29b0c2e8aee812e0d282261de0e439780c6a582e7bdceb27d2" Workload="localhost-k8s-csi--node--driver--ggmcg-eth0" Oct 8 19:59:31.207178 containerd[1439]: 2024-10-08 19:59:31.195 [INFO][4752] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 19:59:31.207178 containerd[1439]: 2024-10-08 19:59:31.195 [INFO][4752] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 19:59:31.207178 containerd[1439]: 2024-10-08 19:59:31.202 [WARNING][4752] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="13b6ddaadef47d29b0c2e8aee812e0d282261de0e439780c6a582e7bdceb27d2" HandleID="k8s-pod-network.13b6ddaadef47d29b0c2e8aee812e0d282261de0e439780c6a582e7bdceb27d2" Workload="localhost-k8s-csi--node--driver--ggmcg-eth0" Oct 8 19:59:31.207178 containerd[1439]: 2024-10-08 19:59:31.202 [INFO][4752] ipam_plugin.go 445: Releasing address using workloadID ContainerID="13b6ddaadef47d29b0c2e8aee812e0d282261de0e439780c6a582e7bdceb27d2" HandleID="k8s-pod-network.13b6ddaadef47d29b0c2e8aee812e0d282261de0e439780c6a582e7bdceb27d2" Workload="localhost-k8s-csi--node--driver--ggmcg-eth0" Oct 8 19:59:31.207178 containerd[1439]: 2024-10-08 19:59:31.203 [INFO][4752] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 19:59:31.207178 containerd[1439]: 2024-10-08 19:59:31.205 [INFO][4744] k8s.go 621: Teardown processing complete. ContainerID="13b6ddaadef47d29b0c2e8aee812e0d282261de0e439780c6a582e7bdceb27d2" Oct 8 19:59:31.208709 containerd[1439]: time="2024-10-08T19:59:31.207627625Z" level=info msg="TearDown network for sandbox \"13b6ddaadef47d29b0c2e8aee812e0d282261de0e439780c6a582e7bdceb27d2\" successfully" Oct 8 19:59:31.233566 containerd[1439]: time="2024-10-08T19:59:31.233432822Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"13b6ddaadef47d29b0c2e8aee812e0d282261de0e439780c6a582e7bdceb27d2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Oct 8 19:59:31.233566 containerd[1439]: time="2024-10-08T19:59:31.233507785Z" level=info msg="RemovePodSandbox \"13b6ddaadef47d29b0c2e8aee812e0d282261de0e439780c6a582e7bdceb27d2\" returns successfully" Oct 8 19:59:31.234324 containerd[1439]: time="2024-10-08T19:59:31.234275528Z" level=info msg="StopPodSandbox for \"250f2203437d2304a2af85ef820da9a0b000057fac254b692a42d8c0b3dbddbd\"" Oct 8 19:59:31.296521 containerd[1439]: 2024-10-08 19:59:31.266 [WARNING][4774] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="250f2203437d2304a2af85ef820da9a0b000057fac254b692a42d8c0b3dbddbd" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6d6d468b64--wq8lg-eth0", GenerateName:"calico-kube-controllers-6d6d468b64-", Namespace:"calico-system", SelfLink:"", UID:"ccb1d4f9-9ed3-446a-bf6d-201aa89f5d74", ResourceVersion:"835", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 58, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6d6d468b64", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6382f06f94fc70f7e82ca8f32d60107d6b76e963ce6cf852b70d732fffe62120", Pod:"calico-kube-controllers-6d6d468b64-wq8lg", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calicb4ee56a745", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:59:31.296521 containerd[1439]: 2024-10-08 19:59:31.266 [INFO][4774] k8s.go 608: Cleaning up netns ContainerID="250f2203437d2304a2af85ef820da9a0b000057fac254b692a42d8c0b3dbddbd" Oct 8 19:59:31.296521 containerd[1439]: 2024-10-08 19:59:31.266 [INFO][4774] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="250f2203437d2304a2af85ef820da9a0b000057fac254b692a42d8c0b3dbddbd" iface="eth0" netns="" Oct 8 19:59:31.296521 containerd[1439]: 2024-10-08 19:59:31.266 [INFO][4774] k8s.go 615: Releasing IP address(es) ContainerID="250f2203437d2304a2af85ef820da9a0b000057fac254b692a42d8c0b3dbddbd" Oct 8 19:59:31.296521 containerd[1439]: 2024-10-08 19:59:31.266 [INFO][4774] utils.go 188: Calico CNI releasing IP address ContainerID="250f2203437d2304a2af85ef820da9a0b000057fac254b692a42d8c0b3dbddbd" Oct 8 19:59:31.296521 containerd[1439]: 2024-10-08 19:59:31.284 [INFO][4782] ipam_plugin.go 417: Releasing address using handleID ContainerID="250f2203437d2304a2af85ef820da9a0b000057fac254b692a42d8c0b3dbddbd" HandleID="k8s-pod-network.250f2203437d2304a2af85ef820da9a0b000057fac254b692a42d8c0b3dbddbd" Workload="localhost-k8s-calico--kube--controllers--6d6d468b64--wq8lg-eth0" Oct 8 19:59:31.296521 containerd[1439]: 2024-10-08 19:59:31.284 [INFO][4782] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 19:59:31.296521 containerd[1439]: 2024-10-08 19:59:31.284 [INFO][4782] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 19:59:31.296521 containerd[1439]: 2024-10-08 19:59:31.291 [WARNING][4782] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="250f2203437d2304a2af85ef820da9a0b000057fac254b692a42d8c0b3dbddbd" HandleID="k8s-pod-network.250f2203437d2304a2af85ef820da9a0b000057fac254b692a42d8c0b3dbddbd" Workload="localhost-k8s-calico--kube--controllers--6d6d468b64--wq8lg-eth0" Oct 8 19:59:31.296521 containerd[1439]: 2024-10-08 19:59:31.291 [INFO][4782] ipam_plugin.go 445: Releasing address using workloadID ContainerID="250f2203437d2304a2af85ef820da9a0b000057fac254b692a42d8c0b3dbddbd" HandleID="k8s-pod-network.250f2203437d2304a2af85ef820da9a0b000057fac254b692a42d8c0b3dbddbd" Workload="localhost-k8s-calico--kube--controllers--6d6d468b64--wq8lg-eth0" Oct 8 19:59:31.296521 containerd[1439]: 2024-10-08 19:59:31.293 [INFO][4782] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 19:59:31.296521 containerd[1439]: 2024-10-08 19:59:31.295 [INFO][4774] k8s.go 621: Teardown processing complete. ContainerID="250f2203437d2304a2af85ef820da9a0b000057fac254b692a42d8c0b3dbddbd" Oct 8 19:59:31.296521 containerd[1439]: time="2024-10-08T19:59:31.296433488Z" level=info msg="TearDown network for sandbox \"250f2203437d2304a2af85ef820da9a0b000057fac254b692a42d8c0b3dbddbd\" successfully" Oct 8 19:59:31.296521 containerd[1439]: time="2024-10-08T19:59:31.296455169Z" level=info msg="StopPodSandbox for \"250f2203437d2304a2af85ef820da9a0b000057fac254b692a42d8c0b3dbddbd\" returns successfully" Oct 8 19:59:31.297361 containerd[1439]: time="2024-10-08T19:59:31.297310395Z" level=info msg="RemovePodSandbox for \"250f2203437d2304a2af85ef820da9a0b000057fac254b692a42d8c0b3dbddbd\"" Oct 8 19:59:31.297361 containerd[1439]: time="2024-10-08T19:59:31.297337396Z" level=info msg="Forcibly stopping sandbox \"250f2203437d2304a2af85ef820da9a0b000057fac254b692a42d8c0b3dbddbd\"" Oct 8 19:59:31.359810 containerd[1439]: 2024-10-08 19:59:31.329 [WARNING][4804] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="250f2203437d2304a2af85ef820da9a0b000057fac254b692a42d8c0b3dbddbd" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6d6d468b64--wq8lg-eth0", GenerateName:"calico-kube-controllers-6d6d468b64-", Namespace:"calico-system", SelfLink:"", UID:"ccb1d4f9-9ed3-446a-bf6d-201aa89f5d74", ResourceVersion:"835", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 58, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6d6d468b64", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6382f06f94fc70f7e82ca8f32d60107d6b76e963ce6cf852b70d732fffe62120", Pod:"calico-kube-controllers-6d6d468b64-wq8lg", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calicb4ee56a745", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:59:31.359810 containerd[1439]: 2024-10-08 19:59:31.329 [INFO][4804] k8s.go 608: Cleaning up netns ContainerID="250f2203437d2304a2af85ef820da9a0b000057fac254b692a42d8c0b3dbddbd" Oct 8 19:59:31.359810 containerd[1439]: 2024-10-08 19:59:31.329 [INFO][4804] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="250f2203437d2304a2af85ef820da9a0b000057fac254b692a42d8c0b3dbddbd" iface="eth0" netns="" Oct 8 19:59:31.359810 containerd[1439]: 2024-10-08 19:59:31.329 [INFO][4804] k8s.go 615: Releasing IP address(es) ContainerID="250f2203437d2304a2af85ef820da9a0b000057fac254b692a42d8c0b3dbddbd" Oct 8 19:59:31.359810 containerd[1439]: 2024-10-08 19:59:31.329 [INFO][4804] utils.go 188: Calico CNI releasing IP address ContainerID="250f2203437d2304a2af85ef820da9a0b000057fac254b692a42d8c0b3dbddbd" Oct 8 19:59:31.359810 containerd[1439]: 2024-10-08 19:59:31.346 [INFO][4812] ipam_plugin.go 417: Releasing address using handleID ContainerID="250f2203437d2304a2af85ef820da9a0b000057fac254b692a42d8c0b3dbddbd" HandleID="k8s-pod-network.250f2203437d2304a2af85ef820da9a0b000057fac254b692a42d8c0b3dbddbd" Workload="localhost-k8s-calico--kube--controllers--6d6d468b64--wq8lg-eth0" Oct 8 19:59:31.359810 containerd[1439]: 2024-10-08 19:59:31.346 [INFO][4812] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 19:59:31.359810 containerd[1439]: 2024-10-08 19:59:31.346 [INFO][4812] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 19:59:31.359810 containerd[1439]: 2024-10-08 19:59:31.355 [WARNING][4812] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="250f2203437d2304a2af85ef820da9a0b000057fac254b692a42d8c0b3dbddbd" HandleID="k8s-pod-network.250f2203437d2304a2af85ef820da9a0b000057fac254b692a42d8c0b3dbddbd" Workload="localhost-k8s-calico--kube--controllers--6d6d468b64--wq8lg-eth0" Oct 8 19:59:31.359810 containerd[1439]: 2024-10-08 19:59:31.355 [INFO][4812] ipam_plugin.go 445: Releasing address using workloadID ContainerID="250f2203437d2304a2af85ef820da9a0b000057fac254b692a42d8c0b3dbddbd" HandleID="k8s-pod-network.250f2203437d2304a2af85ef820da9a0b000057fac254b692a42d8c0b3dbddbd" Workload="localhost-k8s-calico--kube--controllers--6d6d468b64--wq8lg-eth0" Oct 8 19:59:31.359810 containerd[1439]: 2024-10-08 19:59:31.356 [INFO][4812] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 19:59:31.359810 containerd[1439]: 2024-10-08 19:59:31.358 [INFO][4804] k8s.go 621: Teardown processing complete. ContainerID="250f2203437d2304a2af85ef820da9a0b000057fac254b692a42d8c0b3dbddbd" Oct 8 19:59:31.360228 containerd[1439]: time="2024-10-08T19:59:31.359852007Z" level=info msg="TearDown network for sandbox \"250f2203437d2304a2af85ef820da9a0b000057fac254b692a42d8c0b3dbddbd\" successfully" Oct 8 19:59:31.362220 containerd[1439]: time="2024-10-08T19:59:31.362191239Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"250f2203437d2304a2af85ef820da9a0b000057fac254b692a42d8c0b3dbddbd\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Oct 8 19:59:31.362311 containerd[1439]: time="2024-10-08T19:59:31.362251641Z" level=info msg="RemovePodSandbox \"250f2203437d2304a2af85ef820da9a0b000057fac254b692a42d8c0b3dbddbd\" returns successfully" Oct 8 19:59:31.362948 containerd[1439]: time="2024-10-08T19:59:31.362680454Z" level=info msg="StopPodSandbox for \"40250e5d4746e3055d15aceffd5535ca755781752431f94d61786c890f58f374\"" Oct 8 19:59:31.424226 containerd[1439]: 2024-10-08 19:59:31.393 [WARNING][4835] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="40250e5d4746e3055d15aceffd5535ca755781752431f94d61786c890f58f374" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--s4w46-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"73b8d60e-382d-4e38-addd-253451caecf4", ResourceVersion:"709", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 58, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2e2e2e6a094a92afafd1ddca7a0cee4543a91c2903554e19e1a556a0d7cfeac4", Pod:"coredns-6f6b679f8f-s4w46", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali96181c4f4fb", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:59:31.424226 containerd[1439]: 2024-10-08 19:59:31.394 [INFO][4835] k8s.go 608: Cleaning up netns ContainerID="40250e5d4746e3055d15aceffd5535ca755781752431f94d61786c890f58f374" Oct 8 19:59:31.424226 containerd[1439]: 2024-10-08 19:59:31.394 [INFO][4835] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="40250e5d4746e3055d15aceffd5535ca755781752431f94d61786c890f58f374" iface="eth0" netns="" Oct 8 19:59:31.424226 containerd[1439]: 2024-10-08 19:59:31.394 [INFO][4835] k8s.go 615: Releasing IP address(es) ContainerID="40250e5d4746e3055d15aceffd5535ca755781752431f94d61786c890f58f374" Oct 8 19:59:31.424226 containerd[1439]: 2024-10-08 19:59:31.394 [INFO][4835] utils.go 188: Calico CNI releasing IP address ContainerID="40250e5d4746e3055d15aceffd5535ca755781752431f94d61786c890f58f374" Oct 8 19:59:31.424226 containerd[1439]: 2024-10-08 19:59:31.411 [INFO][4843] ipam_plugin.go 417: Releasing address using handleID ContainerID="40250e5d4746e3055d15aceffd5535ca755781752431f94d61786c890f58f374" HandleID="k8s-pod-network.40250e5d4746e3055d15aceffd5535ca755781752431f94d61786c890f58f374" Workload="localhost-k8s-coredns--6f6b679f8f--s4w46-eth0" Oct 8 19:59:31.424226 containerd[1439]: 2024-10-08 19:59:31.411 [INFO][4843] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 19:59:31.424226 containerd[1439]: 2024-10-08 19:59:31.411 [INFO][4843] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 19:59:31.424226 containerd[1439]: 2024-10-08 19:59:31.419 [WARNING][4843] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="40250e5d4746e3055d15aceffd5535ca755781752431f94d61786c890f58f374" HandleID="k8s-pod-network.40250e5d4746e3055d15aceffd5535ca755781752431f94d61786c890f58f374" Workload="localhost-k8s-coredns--6f6b679f8f--s4w46-eth0" Oct 8 19:59:31.424226 containerd[1439]: 2024-10-08 19:59:31.419 [INFO][4843] ipam_plugin.go 445: Releasing address using workloadID ContainerID="40250e5d4746e3055d15aceffd5535ca755781752431f94d61786c890f58f374" HandleID="k8s-pod-network.40250e5d4746e3055d15aceffd5535ca755781752431f94d61786c890f58f374" Workload="localhost-k8s-coredns--6f6b679f8f--s4w46-eth0" Oct 8 19:59:31.424226 containerd[1439]: 2024-10-08 19:59:31.420 [INFO][4843] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 19:59:31.424226 containerd[1439]: 2024-10-08 19:59:31.422 [INFO][4835] k8s.go 621: Teardown processing complete. ContainerID="40250e5d4746e3055d15aceffd5535ca755781752431f94d61786c890f58f374" Oct 8 19:59:31.424745 containerd[1439]: time="2024-10-08T19:59:31.424258316Z" level=info msg="TearDown network for sandbox \"40250e5d4746e3055d15aceffd5535ca755781752431f94d61786c890f58f374\" successfully" Oct 8 19:59:31.424745 containerd[1439]: time="2024-10-08T19:59:31.424281917Z" level=info msg="StopPodSandbox for \"40250e5d4746e3055d15aceffd5535ca755781752431f94d61786c890f58f374\" returns successfully" Oct 8 19:59:31.425571 containerd[1439]: time="2024-10-08T19:59:31.425217666Z" level=info msg="RemovePodSandbox for \"40250e5d4746e3055d15aceffd5535ca755781752431f94d61786c890f58f374\"" Oct 8 19:59:31.425571 containerd[1439]: time="2024-10-08T19:59:31.425299548Z" level=info msg="Forcibly stopping sandbox \"40250e5d4746e3055d15aceffd5535ca755781752431f94d61786c890f58f374\"" Oct 8 19:59:31.488078 containerd[1439]: 2024-10-08 19:59:31.456 [WARNING][4866] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="40250e5d4746e3055d15aceffd5535ca755781752431f94d61786c890f58f374" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--s4w46-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"73b8d60e-382d-4e38-addd-253451caecf4", ResourceVersion:"709", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 58, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2e2e2e6a094a92afafd1ddca7a0cee4543a91c2903554e19e1a556a0d7cfeac4", Pod:"coredns-6f6b679f8f-s4w46", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali96181c4f4fb", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:59:31.488078 containerd[1439]: 2024-10-08 19:59:31.456 [INFO][4866] k8s.go 608: Cleaning up netns ContainerID="40250e5d4746e3055d15aceffd5535ca755781752431f94d61786c890f58f374" Oct 8 19:59:31.488078 containerd[1439]: 2024-10-08 19:59:31.456 [INFO][4866] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="40250e5d4746e3055d15aceffd5535ca755781752431f94d61786c890f58f374" iface="eth0" netns="" Oct 8 19:59:31.488078 containerd[1439]: 2024-10-08 19:59:31.456 [INFO][4866] k8s.go 615: Releasing IP address(es) ContainerID="40250e5d4746e3055d15aceffd5535ca755781752431f94d61786c890f58f374" Oct 8 19:59:31.488078 containerd[1439]: 2024-10-08 19:59:31.456 [INFO][4866] utils.go 188: Calico CNI releasing IP address ContainerID="40250e5d4746e3055d15aceffd5535ca755781752431f94d61786c890f58f374" Oct 8 19:59:31.488078 containerd[1439]: 2024-10-08 19:59:31.475 [INFO][4874] ipam_plugin.go 417: Releasing address using handleID ContainerID="40250e5d4746e3055d15aceffd5535ca755781752431f94d61786c890f58f374" HandleID="k8s-pod-network.40250e5d4746e3055d15aceffd5535ca755781752431f94d61786c890f58f374" Workload="localhost-k8s-coredns--6f6b679f8f--s4w46-eth0" Oct 8 19:59:31.488078 containerd[1439]: 2024-10-08 19:59:31.475 [INFO][4874] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 19:59:31.488078 containerd[1439]: 2024-10-08 19:59:31.475 [INFO][4874] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 19:59:31.488078 containerd[1439]: 2024-10-08 19:59:31.483 [WARNING][4874] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="40250e5d4746e3055d15aceffd5535ca755781752431f94d61786c890f58f374" HandleID="k8s-pod-network.40250e5d4746e3055d15aceffd5535ca755781752431f94d61786c890f58f374" Workload="localhost-k8s-coredns--6f6b679f8f--s4w46-eth0" Oct 8 19:59:31.488078 containerd[1439]: 2024-10-08 19:59:31.483 [INFO][4874] ipam_plugin.go 445: Releasing address using workloadID ContainerID="40250e5d4746e3055d15aceffd5535ca755781752431f94d61786c890f58f374" HandleID="k8s-pod-network.40250e5d4746e3055d15aceffd5535ca755781752431f94d61786c890f58f374" Workload="localhost-k8s-coredns--6f6b679f8f--s4w46-eth0" Oct 8 19:59:31.488078 containerd[1439]: 2024-10-08 19:59:31.484 [INFO][4874] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 19:59:31.488078 containerd[1439]: 2024-10-08 19:59:31.486 [INFO][4866] k8s.go 621: Teardown processing complete. ContainerID="40250e5d4746e3055d15aceffd5535ca755781752431f94d61786c890f58f374" Oct 8 19:59:31.488481 containerd[1439]: time="2024-10-08T19:59:31.488117649Z" level=info msg="TearDown network for sandbox \"40250e5d4746e3055d15aceffd5535ca755781752431f94d61786c890f58f374\" successfully" Oct 8 19:59:31.490635 containerd[1439]: time="2024-10-08T19:59:31.490599725Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"40250e5d4746e3055d15aceffd5535ca755781752431f94d61786c890f58f374\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Oct 8 19:59:31.490702 containerd[1439]: time="2024-10-08T19:59:31.490663567Z" level=info msg="RemovePodSandbox \"40250e5d4746e3055d15aceffd5535ca755781752431f94d61786c890f58f374\" returns successfully" Oct 8 19:59:31.491400 containerd[1439]: time="2024-10-08T19:59:31.491119821Z" level=info msg="StopPodSandbox for \"a9b1e279edefff23149aad4d2ba0edd2dea6c38a621a5cf2b33459b96f9721fe\"" Oct 8 19:59:31.551958 containerd[1439]: 2024-10-08 19:59:31.522 [WARNING][4897] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a9b1e279edefff23149aad4d2ba0edd2dea6c38a621a5cf2b33459b96f9721fe" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--lccm5-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"4a8304d9-5b66-4740-817a-39422665117d", ResourceVersion:"780", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 58, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"33e47995f9e9df8f7bd0958f4f4e29d19910531953f6522c18ac911723071cb9", Pod:"coredns-6f6b679f8f-lccm5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6af82b3a807", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:59:31.551958 containerd[1439]: 2024-10-08 19:59:31.523 [INFO][4897] k8s.go 608: Cleaning up netns ContainerID="a9b1e279edefff23149aad4d2ba0edd2dea6c38a621a5cf2b33459b96f9721fe" Oct 8 19:59:31.551958 containerd[1439]: 2024-10-08 19:59:31.523 [INFO][4897] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="a9b1e279edefff23149aad4d2ba0edd2dea6c38a621a5cf2b33459b96f9721fe" iface="eth0" netns="" Oct 8 19:59:31.551958 containerd[1439]: 2024-10-08 19:59:31.523 [INFO][4897] k8s.go 615: Releasing IP address(es) ContainerID="a9b1e279edefff23149aad4d2ba0edd2dea6c38a621a5cf2b33459b96f9721fe" Oct 8 19:59:31.551958 containerd[1439]: 2024-10-08 19:59:31.523 [INFO][4897] utils.go 188: Calico CNI releasing IP address ContainerID="a9b1e279edefff23149aad4d2ba0edd2dea6c38a621a5cf2b33459b96f9721fe" Oct 8 19:59:31.551958 containerd[1439]: 2024-10-08 19:59:31.539 [INFO][4904] ipam_plugin.go 417: Releasing address using handleID ContainerID="a9b1e279edefff23149aad4d2ba0edd2dea6c38a621a5cf2b33459b96f9721fe" HandleID="k8s-pod-network.a9b1e279edefff23149aad4d2ba0edd2dea6c38a621a5cf2b33459b96f9721fe" Workload="localhost-k8s-coredns--6f6b679f8f--lccm5-eth0" Oct 8 19:59:31.551958 containerd[1439]: 2024-10-08 19:59:31.540 [INFO][4904] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 19:59:31.551958 containerd[1439]: 2024-10-08 19:59:31.540 [INFO][4904] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 19:59:31.551958 containerd[1439]: 2024-10-08 19:59:31.547 [WARNING][4904] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a9b1e279edefff23149aad4d2ba0edd2dea6c38a621a5cf2b33459b96f9721fe" HandleID="k8s-pod-network.a9b1e279edefff23149aad4d2ba0edd2dea6c38a621a5cf2b33459b96f9721fe" Workload="localhost-k8s-coredns--6f6b679f8f--lccm5-eth0" Oct 8 19:59:31.551958 containerd[1439]: 2024-10-08 19:59:31.547 [INFO][4904] ipam_plugin.go 445: Releasing address using workloadID ContainerID="a9b1e279edefff23149aad4d2ba0edd2dea6c38a621a5cf2b33459b96f9721fe" HandleID="k8s-pod-network.a9b1e279edefff23149aad4d2ba0edd2dea6c38a621a5cf2b33459b96f9721fe" Workload="localhost-k8s-coredns--6f6b679f8f--lccm5-eth0" Oct 8 19:59:31.551958 containerd[1439]: 2024-10-08 19:59:31.548 [INFO][4904] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 19:59:31.551958 containerd[1439]: 2024-10-08 19:59:31.550 [INFO][4897] k8s.go 621: Teardown processing complete. ContainerID="a9b1e279edefff23149aad4d2ba0edd2dea6c38a621a5cf2b33459b96f9721fe" Oct 8 19:59:31.552340 containerd[1439]: time="2024-10-08T19:59:31.551984941Z" level=info msg="TearDown network for sandbox \"a9b1e279edefff23149aad4d2ba0edd2dea6c38a621a5cf2b33459b96f9721fe\" successfully" Oct 8 19:59:31.552340 containerd[1439]: time="2024-10-08T19:59:31.552007902Z" level=info msg="StopPodSandbox for \"a9b1e279edefff23149aad4d2ba0edd2dea6c38a621a5cf2b33459b96f9721fe\" returns successfully" Oct 8 19:59:31.552494 containerd[1439]: time="2024-10-08T19:59:31.552405154Z" level=info msg="RemovePodSandbox for \"a9b1e279edefff23149aad4d2ba0edd2dea6c38a621a5cf2b33459b96f9721fe\"" Oct 8 19:59:31.552494 containerd[1439]: time="2024-10-08T19:59:31.552435875Z" level=info msg="Forcibly stopping sandbox \"a9b1e279edefff23149aad4d2ba0edd2dea6c38a621a5cf2b33459b96f9721fe\"" Oct 8 19:59:31.620400 containerd[1439]: 2024-10-08 19:59:31.589 [WARNING][4927] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a9b1e279edefff23149aad4d2ba0edd2dea6c38a621a5cf2b33459b96f9721fe" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--lccm5-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"4a8304d9-5b66-4740-817a-39422665117d", ResourceVersion:"780", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 58, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"33e47995f9e9df8f7bd0958f4f4e29d19910531953f6522c18ac911723071cb9", Pod:"coredns-6f6b679f8f-lccm5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6af82b3a807", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:59:31.620400 containerd[1439]: 2024-10-08 19:59:31.589 [INFO][4927] k8s.go 608: Cleaning up netns ContainerID="a9b1e279edefff23149aad4d2ba0edd2dea6c38a621a5cf2b33459b96f9721fe" Oct 8 19:59:31.620400 containerd[1439]: 2024-10-08 19:59:31.589 [INFO][4927] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="a9b1e279edefff23149aad4d2ba0edd2dea6c38a621a5cf2b33459b96f9721fe" iface="eth0" netns="" Oct 8 19:59:31.620400 containerd[1439]: 2024-10-08 19:59:31.589 [INFO][4927] k8s.go 615: Releasing IP address(es) ContainerID="a9b1e279edefff23149aad4d2ba0edd2dea6c38a621a5cf2b33459b96f9721fe" Oct 8 19:59:31.620400 containerd[1439]: 2024-10-08 19:59:31.589 [INFO][4927] utils.go 188: Calico CNI releasing IP address ContainerID="a9b1e279edefff23149aad4d2ba0edd2dea6c38a621a5cf2b33459b96f9721fe" Oct 8 19:59:31.620400 containerd[1439]: 2024-10-08 19:59:31.607 [INFO][4934] ipam_plugin.go 417: Releasing address using handleID ContainerID="a9b1e279edefff23149aad4d2ba0edd2dea6c38a621a5cf2b33459b96f9721fe" HandleID="k8s-pod-network.a9b1e279edefff23149aad4d2ba0edd2dea6c38a621a5cf2b33459b96f9721fe" Workload="localhost-k8s-coredns--6f6b679f8f--lccm5-eth0" Oct 8 19:59:31.620400 containerd[1439]: 2024-10-08 19:59:31.607 [INFO][4934] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 19:59:31.620400 containerd[1439]: 2024-10-08 19:59:31.607 [INFO][4934] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 19:59:31.620400 containerd[1439]: 2024-10-08 19:59:31.615 [WARNING][4934] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a9b1e279edefff23149aad4d2ba0edd2dea6c38a621a5cf2b33459b96f9721fe" HandleID="k8s-pod-network.a9b1e279edefff23149aad4d2ba0edd2dea6c38a621a5cf2b33459b96f9721fe" Workload="localhost-k8s-coredns--6f6b679f8f--lccm5-eth0" Oct 8 19:59:31.620400 containerd[1439]: 2024-10-08 19:59:31.615 [INFO][4934] ipam_plugin.go 445: Releasing address using workloadID ContainerID="a9b1e279edefff23149aad4d2ba0edd2dea6c38a621a5cf2b33459b96f9721fe" HandleID="k8s-pod-network.a9b1e279edefff23149aad4d2ba0edd2dea6c38a621a5cf2b33459b96f9721fe" Workload="localhost-k8s-coredns--6f6b679f8f--lccm5-eth0" Oct 8 19:59:31.620400 containerd[1439]: 2024-10-08 19:59:31.617 [INFO][4934] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 19:59:31.620400 containerd[1439]: 2024-10-08 19:59:31.618 [INFO][4927] k8s.go 621: Teardown processing complete. ContainerID="a9b1e279edefff23149aad4d2ba0edd2dea6c38a621a5cf2b33459b96f9721fe" Oct 8 19:59:31.620400 containerd[1439]: time="2024-10-08T19:59:31.620378814Z" level=info msg="TearDown network for sandbox \"a9b1e279edefff23149aad4d2ba0edd2dea6c38a621a5cf2b33459b96f9721fe\" successfully" Oct 8 19:59:31.624039 containerd[1439]: time="2024-10-08T19:59:31.623999605Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a9b1e279edefff23149aad4d2ba0edd2dea6c38a621a5cf2b33459b96f9721fe\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Oct 8 19:59:31.624100 containerd[1439]: time="2024-10-08T19:59:31.624056727Z" level=info msg="RemovePodSandbox \"a9b1e279edefff23149aad4d2ba0edd2dea6c38a621a5cf2b33459b96f9721fe\" returns successfully" Oct 8 19:59:34.731234 systemd[1]: Started sshd@18-10.0.0.130:22-10.0.0.1:50998.service - OpenSSH per-connection server daemon (10.0.0.1:50998). Oct 8 19:59:34.772137 sshd[4944]: Accepted publickey for core from 10.0.0.1 port 50998 ssh2: RSA SHA256:PeFR0GwG3Km7u6+IJymPx7tkM/vpusnYsvzmiMSzq3A Oct 8 19:59:34.773532 sshd[4944]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 19:59:34.777774 systemd-logind[1426]: New session 19 of user core. Oct 8 19:59:34.791020 systemd[1]: Started session-19.scope - Session 19 of User core. Oct 8 19:59:34.929685 sshd[4944]: pam_unix(sshd:session): session closed for user core Oct 8 19:59:34.933140 systemd[1]: sshd@18-10.0.0.130:22-10.0.0.1:50998.service: Deactivated successfully. Oct 8 19:59:34.934805 systemd[1]: session-19.scope: Deactivated successfully. Oct 8 19:59:34.935583 systemd-logind[1426]: Session 19 logged out. Waiting for processes to exit. Oct 8 19:59:34.936522 systemd-logind[1426]: Removed session 19. Oct 8 19:59:39.940405 systemd[1]: Started sshd@19-10.0.0.130:22-10.0.0.1:51012.service - OpenSSH per-connection server daemon (10.0.0.1:51012). Oct 8 19:59:39.978404 sshd[4975]: Accepted publickey for core from 10.0.0.1 port 51012 ssh2: RSA SHA256:PeFR0GwG3Km7u6+IJymPx7tkM/vpusnYsvzmiMSzq3A Oct 8 19:59:39.979545 sshd[4975]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 19:59:39.982883 systemd-logind[1426]: New session 20 of user core. Oct 8 19:59:39.995092 systemd[1]: Started session-20.scope - Session 20 of User core. Oct 8 19:59:40.124395 sshd[4975]: pam_unix(sshd:session): session closed for user core Oct 8 19:59:40.127472 systemd[1]: sshd@19-10.0.0.130:22-10.0.0.1:51012.service: Deactivated successfully. Oct 8 19:59:40.129109 systemd[1]: session-20.scope: Deactivated successfully. 
Oct 8 19:59:40.129782 systemd-logind[1426]: Session 20 logged out. Waiting for processes to exit.
Oct 8 19:59:40.130931 systemd-logind[1426]: Removed session 20.