May 15 12:38:59.878602 kernel: Linux version 6.12.20-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Thu May 15 10:42:41 -00 2025
May 15 12:38:59.878625 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=48287e633374b880fa618bd42bee102ae77c50831859c6cedd6ca9e1aec3dd5c
May 15 12:38:59.878634 kernel: BIOS-provided physical RAM map:
May 15 12:38:59.878643 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009f7ff] usable
May 15 12:38:59.878649 kernel: BIOS-e820: [mem 0x000000000009f800-0x000000000009ffff] reserved
May 15 12:38:59.878655 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
May 15 12:38:59.878662 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdcfff] usable
May 15 12:38:59.878669 kernel: BIOS-e820: [mem 0x000000007ffdd000-0x000000007fffffff] reserved
May 15 12:38:59.878675 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
May 15 12:38:59.878681 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
May 15 12:38:59.878687 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
May 15 12:38:59.878693 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
May 15 12:38:59.878702 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000017fffffff] usable
May 15 12:38:59.878709 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
May 15 12:38:59.878716 kernel: NX (Execute Disable) protection: active
May 15 12:38:59.878723 kernel: APIC: Static calls initialized
May 15 12:38:59.878730 kernel: SMBIOS 2.8 present.
May 15 12:38:59.878739 kernel: DMI: Linode Compute Instance, BIOS Not Specified
May 15 12:38:59.878745 kernel: DMI: Memory slots populated: 1/1
May 15 12:38:59.878752 kernel: Hypervisor detected: KVM
May 15 12:38:59.878759 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
May 15 12:38:59.878765 kernel: kvm-clock: using sched offset of 5801643970 cycles
May 15 12:38:59.878772 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
May 15 12:38:59.878780 kernel: tsc: Detected 2000.000 MHz processor
May 15 12:38:59.878787 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
May 15 12:38:59.878794 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
May 15 12:38:59.878801 kernel: last_pfn = 0x180000 max_arch_pfn = 0x400000000
May 15 12:38:59.878810 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
May 15 12:38:59.878817 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
May 15 12:38:59.878823 kernel: last_pfn = 0x7ffdd max_arch_pfn = 0x400000000
May 15 12:38:59.878830 kernel: Using GB pages for direct mapping
May 15 12:38:59.878837 kernel: ACPI: Early table checksum verification disabled
May 15 12:38:59.878844 kernel: ACPI: RSDP 0x00000000000F51B0 000014 (v00 BOCHS )
May 15 12:38:59.878850 kernel: ACPI: RSDT 0x000000007FFE2307 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 15 12:38:59.878857 kernel: ACPI: FACP 0x000000007FFE20F7 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
May 15 12:38:59.878864 kernel: ACPI: DSDT 0x000000007FFE0040 0020B7 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 15 12:38:59.878873 kernel: ACPI: FACS 0x000000007FFE0000 000040
May 15 12:38:59.878880 kernel: ACPI: APIC 0x000000007FFE21EB 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 15 12:38:59.878887 kernel: ACPI: HPET 0x000000007FFE226B 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 15 12:38:59.878894 kernel: ACPI: MCFG 0x000000007FFE22A3 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 15 12:38:59.878905 kernel: ACPI: WAET 0x000000007FFE22DF 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 15 12:38:59.878913 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe20f7-0x7ffe21ea]
May 15 12:38:59.878922 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe20f6]
May 15 12:38:59.878929 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
May 15 12:38:59.878937 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe21eb-0x7ffe226a]
May 15 12:38:59.878944 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe226b-0x7ffe22a2]
May 15 12:38:59.878951 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe22a3-0x7ffe22de]
May 15 12:38:59.878958 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe22df-0x7ffe2306]
May 15 12:38:59.880013 kernel: No NUMA configuration found
May 15 12:38:59.880024 kernel: Faking a node at [mem 0x0000000000000000-0x000000017fffffff]
May 15 12:38:59.880036 kernel: NODE_DATA(0) allocated [mem 0x17fff6dc0-0x17fffdfff]
May 15 12:38:59.880043 kernel: Zone ranges:
May 15 12:38:59.880051 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
May 15 12:38:59.880058 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
May 15 12:38:59.880065 kernel: Normal [mem 0x0000000100000000-0x000000017fffffff]
May 15 12:38:59.880072 kernel: Device empty
May 15 12:38:59.880079 kernel: Movable zone start for each node
May 15 12:38:59.880086 kernel: Early memory node ranges
May 15 12:38:59.880093 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
May 15 12:38:59.880100 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdcfff]
May 15 12:38:59.880109 kernel: node 0: [mem 0x0000000100000000-0x000000017fffffff]
May 15 12:38:59.880116 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000017fffffff]
May 15 12:38:59.880123 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
May 15 12:38:59.880130 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
May 15 12:38:59.880137 kernel: On node 0, zone Normal: 35 pages in unavailable ranges
May 15 12:38:59.880144 kernel: ACPI: PM-Timer IO Port: 0x608
May 15 12:38:59.880151 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
May 15 12:38:59.880158 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
May 15 12:38:59.880165 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
May 15 12:38:59.880174 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
May 15 12:38:59.880182 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
May 15 12:38:59.880189 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
May 15 12:38:59.880195 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
May 15 12:38:59.880203 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
May 15 12:38:59.880209 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
May 15 12:38:59.880217 kernel: TSC deadline timer available
May 15 12:38:59.880224 kernel: CPU topo: Max. logical packages: 1
May 15 12:38:59.880230 kernel: CPU topo: Max. logical dies: 1
May 15 12:38:59.880239 kernel: CPU topo: Max. dies per package: 1
May 15 12:38:59.880246 kernel: CPU topo: Max. threads per core: 1
May 15 12:38:59.880253 kernel: CPU topo: Num. cores per package: 2
May 15 12:38:59.880260 kernel: CPU topo: Num. threads per package: 2
May 15 12:38:59.880267 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs
May 15 12:38:59.880274 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
May 15 12:38:59.880281 kernel: kvm-guest: KVM setup pv remote TLB flush
May 15 12:38:59.880288 kernel: kvm-guest: setup PV sched yield
May 15 12:38:59.880295 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
May 15 12:38:59.880304 kernel: Booting paravirtualized kernel on KVM
May 15 12:38:59.880311 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
May 15 12:38:59.880318 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
May 15 12:38:59.880325 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576
May 15 12:38:59.880332 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152
May 15 12:38:59.880339 kernel: pcpu-alloc: [0] 0 1
May 15 12:38:59.880346 kernel: kvm-guest: PV spinlocks enabled
May 15 12:38:59.880353 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
May 15 12:38:59.880361 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=48287e633374b880fa618bd42bee102ae77c50831859c6cedd6ca9e1aec3dd5c
May 15 12:38:59.880370 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 15 12:38:59.880377 kernel: random: crng init done
May 15 12:38:59.880384 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 15 12:38:59.880391 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 15 12:38:59.880398 kernel: Fallback order for Node 0: 0
May 15 12:38:59.880405 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1048443
May 15 12:38:59.880412 kernel: Policy zone: Normal
May 15 12:38:59.880419 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 15 12:38:59.880428 kernel: software IO TLB: area num 2.
May 15 12:38:59.880435 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
May 15 12:38:59.880442 kernel: ftrace: allocating 40065 entries in 157 pages
May 15 12:38:59.880449 kernel: ftrace: allocated 157 pages with 5 groups
May 15 12:38:59.880456 kernel: Dynamic Preempt: voluntary
May 15 12:38:59.880463 kernel: rcu: Preemptible hierarchical RCU implementation.
May 15 12:38:59.880471 kernel: rcu: RCU event tracing is enabled.
May 15 12:38:59.880478 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
May 15 12:38:59.880485 kernel: Trampoline variant of Tasks RCU enabled.
May 15 12:38:59.880493 kernel: Rude variant of Tasks RCU enabled.
May 15 12:38:59.880502 kernel: Tracing variant of Tasks RCU enabled.
May 15 12:38:59.880509 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 15 12:38:59.880516 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
May 15 12:38:59.880523 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 15 12:38:59.880536 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 15 12:38:59.880546 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 15 12:38:59.880553 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
May 15 12:38:59.880560 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 15 12:38:59.880568 kernel: Console: colour VGA+ 80x25
May 15 12:38:59.880575 kernel: printk: legacy console [tty0] enabled
May 15 12:38:59.880582 kernel: printk: legacy console [ttyS0] enabled
May 15 12:38:59.880590 kernel: ACPI: Core revision 20240827
May 15 12:38:59.880599 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
May 15 12:38:59.880607 kernel: APIC: Switch to symmetric I/O mode setup
May 15 12:38:59.880614 kernel: x2apic enabled
May 15 12:38:59.880622 kernel: APIC: Switched APIC routing to: physical x2apic
May 15 12:38:59.880631 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
May 15 12:38:59.880654 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
May 15 12:38:59.880661 kernel: kvm-guest: setup PV IPIs
May 15 12:38:59.880669 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
May 15 12:38:59.880676 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x39a85c9bff6, max_idle_ns: 881590591483 ns
May 15 12:38:59.880684 kernel: Calibrating delay loop (skipped) preset value.. 4000.00 BogoMIPS (lpj=2000000)
May 15 12:38:59.880691 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
May 15 12:38:59.880699 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
May 15 12:38:59.880707 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
May 15 12:38:59.880716 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
May 15 12:38:59.880724 kernel: Spectre V2 : Mitigation: Retpolines
May 15 12:38:59.880731 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
May 15 12:38:59.880739 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
May 15 12:38:59.880746 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
May 15 12:38:59.880753 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
May 15 12:38:59.880761 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
May 15 12:38:59.880769 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
May 15 12:38:59.880777 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
May 15 12:38:59.880786 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
May 15 12:38:59.880794 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
May 15 12:38:59.880801 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
May 15 12:38:59.880809 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
May 15 12:38:59.880816 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
May 15 12:38:59.880824 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
May 15 12:38:59.880831 kernel: x86/fpu: xstate_offset[9]: 832, xstate_sizes[9]: 8
May 15 12:38:59.880839 kernel: x86/fpu: Enabled xstate features 0x207, context size is 840 bytes, using 'compacted' format.
May 15 12:38:59.880848 kernel: Freeing SMP alternatives memory: 32K
May 15 12:38:59.880856 kernel: pid_max: default: 32768 minimum: 301
May 15 12:38:59.880863 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
May 15 12:38:59.880871 kernel: landlock: Up and running.
May 15 12:38:59.880878 kernel: SELinux: Initializing.
May 15 12:38:59.880886 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 15 12:38:59.880893 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 15 12:38:59.880900 kernel: smpboot: CPU0: AMD EPYC 7713 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
May 15 12:38:59.880908 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
May 15 12:38:59.880917 kernel: ... version: 0
May 15 12:38:59.880925 kernel: ... bit width: 48
May 15 12:38:59.880932 kernel: ... generic registers: 6
May 15 12:38:59.880940 kernel: ... value mask: 0000ffffffffffff
May 15 12:38:59.880947 kernel: ... max period: 00007fffffffffff
May 15 12:38:59.880954 kernel: ... fixed-purpose events: 0
May 15 12:38:59.880961 kernel: ... event mask: 000000000000003f
May 15 12:38:59.880983 kernel: signal: max sigframe size: 3376
May 15 12:38:59.880991 kernel: rcu: Hierarchical SRCU implementation.
May 15 12:38:59.880998 kernel: rcu: Max phase no-delay instances is 400.
May 15 12:38:59.881008 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
May 15 12:38:59.881016 kernel: smp: Bringing up secondary CPUs ...
May 15 12:38:59.881023 kernel: smpboot: x86: Booting SMP configuration:
May 15 12:38:59.881030 kernel: .... node #0, CPUs: #1
May 15 12:38:59.881037 kernel: smp: Brought up 1 node, 2 CPUs
May 15 12:38:59.881044 kernel: smpboot: Total of 2 processors activated (8000.00 BogoMIPS)
May 15 12:38:59.881051 kernel: Memory: 3961048K/4193772K available (14336K kernel code, 2438K rwdata, 9944K rodata, 54416K init, 2544K bss, 227296K reserved, 0K cma-reserved)
May 15 12:38:59.881059 kernel: devtmpfs: initialized
May 15 12:38:59.881066 kernel: x86/mm: Memory block size: 128MB
May 15 12:38:59.881075 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 15 12:38:59.881083 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
May 15 12:38:59.881090 kernel: pinctrl core: initialized pinctrl subsystem
May 15 12:38:59.881097 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 15 12:38:59.881104 kernel: audit: initializing netlink subsys (disabled)
May 15 12:38:59.881111 kernel: audit: type=2000 audit(1747312737.046:1): state=initialized audit_enabled=0 res=1
May 15 12:38:59.881119 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 15 12:38:59.881126 kernel: thermal_sys: Registered thermal governor 'user_space'
May 15 12:38:59.881135 kernel: cpuidle: using governor menu
May 15 12:38:59.881142 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 15 12:38:59.881150 kernel: dca service started, version 1.12.1
May 15 12:38:59.881157 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
May 15 12:38:59.881165 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
May 15 12:38:59.881172 kernel: PCI: Using configuration type 1 for base access
May 15 12:38:59.881179 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
May 15 12:38:59.881187 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
May 15 12:38:59.881194 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
May 15 12:38:59.881203 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 15 12:38:59.881210 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
May 15 12:38:59.881217 kernel: ACPI: Added _OSI(Module Device)
May 15 12:38:59.881224 kernel: ACPI: Added _OSI(Processor Device)
May 15 12:38:59.881231 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 15 12:38:59.881239 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 15 12:38:59.881246 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 15 12:38:59.881253 kernel: ACPI: Interpreter enabled
May 15 12:38:59.881260 kernel: ACPI: PM: (supports S0 S3 S5)
May 15 12:38:59.881267 kernel: ACPI: Using IOAPIC for interrupt routing
May 15 12:38:59.881276 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
May 15 12:38:59.881283 kernel: PCI: Using E820 reservations for host bridge windows
May 15 12:38:59.881290 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
May 15 12:38:59.881298 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 15 12:38:59.881472 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 15 12:38:59.881586 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
May 15 12:38:59.881693 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
May 15 12:38:59.881707 kernel: PCI host bridge to bus 0000:00
May 15 12:38:59.881821 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
May 15 12:38:59.881920 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
May 15 12:38:59.884138 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
May 15 12:38:59.884251 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
May 15 12:38:59.884349 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
May 15 12:38:59.884445 kernel: pci_bus 0000:00: root bus resource [mem 0x180000000-0x97fffffff window]
May 15 12:38:59.884549 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 15 12:38:59.884686 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
May 15 12:38:59.884816 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
May 15 12:38:59.884934 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref]
May 15 12:38:59.885380 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff]
May 15 12:38:59.885498 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref]
May 15 12:38:59.885610 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
May 15 12:38:59.885729 kernel: pci 0000:00:02.0: [1af4:1004] type 00 class 0x010000 conventional PCI endpoint
May 15 12:38:59.885837 kernel: pci 0000:00:02.0: BAR 0 [io 0xc000-0xc03f]
May 15 12:38:59.885946 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff]
May 15 12:38:59.886088 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref]
May 15 12:38:59.886206 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
May 15 12:38:59.886314 kernel: pci 0000:00:03.0: BAR 0 [io 0xc040-0xc07f]
May 15 12:38:59.886423 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff]
May 15 12:38:59.886527 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit pref]
May 15 12:38:59.886631 kernel: pci 0000:00:03.0: ROM [mem 0xfeb80000-0xfebbffff pref]
May 15 12:38:59.886749 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
May 15 12:38:59.886855 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
May 15 12:38:59.886983 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
May 15 12:38:59.887100 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc0c0-0xc0df]
May 15 12:38:59.887204 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd3000-0xfebd3fff]
May 15 12:38:59.887389 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
May 15 12:38:59.887519 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f]
May 15 12:38:59.887531 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
May 15 12:38:59.887543 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
May 15 12:38:59.887556 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
May 15 12:38:59.887568 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
May 15 12:38:59.887584 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
May 15 12:38:59.887595 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
May 15 12:38:59.887603 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
May 15 12:38:59.887610 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
May 15 12:38:59.887617 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
May 15 12:38:59.887625 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
May 15 12:38:59.887632 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
May 15 12:38:59.887640 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
May 15 12:38:59.887647 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
May 15 12:38:59.887657 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
May 15 12:38:59.887665 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
May 15 12:38:59.887672 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
May 15 12:38:59.887680 kernel: iommu: Default domain type: Translated
May 15 12:38:59.887687 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
May 15 12:38:59.887695 kernel: PCI: Using ACPI for IRQ routing
May 15 12:38:59.887702 kernel: PCI: pci_cache_line_size set to 64 bytes
May 15 12:38:59.887710 kernel: e820: reserve RAM buffer [mem 0x0009f800-0x0009ffff]
May 15 12:38:59.887718 kernel: e820: reserve RAM buffer [mem 0x7ffdd000-0x7fffffff]
May 15 12:38:59.887832 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
May 15 12:38:59.887937 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
May 15 12:38:59.888061 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
May 15 12:38:59.888072 kernel: vgaarb: loaded
May 15 12:38:59.888080 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
May 15 12:38:59.888087 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
May 15 12:38:59.888095 kernel: clocksource: Switched to clocksource kvm-clock
May 15 12:38:59.888102 kernel: VFS: Disk quotas dquot_6.6.0
May 15 12:38:59.888114 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 15 12:38:59.888121 kernel: pnp: PnP ACPI init
May 15 12:38:59.888244 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
May 15 12:38:59.888255 kernel: pnp: PnP ACPI: found 5 devices
May 15 12:38:59.888263 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
May 15 12:38:59.888271 kernel: NET: Registered PF_INET protocol family
May 15 12:38:59.888278 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 15 12:38:59.888286 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 15 12:38:59.888297 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 15 12:38:59.888304 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 15 12:38:59.888311 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
May 15 12:38:59.888319 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 15 12:38:59.888326 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 15 12:38:59.888334 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 15 12:38:59.888341 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 15 12:38:59.888348 kernel: NET: Registered PF_XDP protocol family
May 15 12:38:59.888446 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
May 15 12:38:59.888546 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
May 15 12:38:59.888642 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
May 15 12:38:59.888737 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
May 15 12:38:59.888832 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
May 15 12:38:59.888942 kernel: pci_bus 0000:00: resource 9 [mem 0x180000000-0x97fffffff window]
May 15 12:38:59.888959 kernel: PCI: CLS 0 bytes, default 64
May 15 12:38:59.888985 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
May 15 12:38:59.888996 kernel: software IO TLB: mapped [mem 0x000000007bfdd000-0x000000007ffdd000] (64MB)
May 15 12:38:59.889006 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x39a85c9bff6, max_idle_ns: 881590591483 ns
May 15 12:38:59.889021 kernel: Initialise system trusted keyrings
May 15 12:38:59.889032 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 15 12:38:59.889043 kernel: Key type asymmetric registered
May 15 12:38:59.889054 kernel: Asymmetric key parser 'x509' registered
May 15 12:38:59.889065 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
May 15 12:38:59.889077 kernel: io scheduler mq-deadline registered
May 15 12:38:59.889086 kernel: io scheduler kyber registered
May 15 12:38:59.889093 kernel: io scheduler bfq registered
May 15 12:38:59.889101 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
May 15 12:38:59.889112 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
May 15 12:38:59.889120 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
May 15 12:38:59.889127 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 15 12:38:59.889134 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
May 15 12:38:59.889142 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
May 15 12:38:59.889149 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
May 15 12:38:59.889156 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
May 15 12:38:59.889279 kernel: rtc_cmos 00:03: RTC can wake from S4
May 15 12:38:59.889293 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
May 15 12:38:59.889393 kernel: rtc_cmos 00:03: registered as rtc0
May 15 12:38:59.889492 kernel: rtc_cmos 00:03: setting system clock to 2025-05-15T12:38:59 UTC (1747312739)
May 15 12:38:59.890779 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
May 15 12:38:59.890794 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
May 15 12:38:59.890803 kernel: NET: Registered PF_INET6 protocol family
May 15 12:38:59.890810 kernel: Segment Routing with IPv6
May 15 12:38:59.890817 kernel: In-situ OAM (IOAM) with IPv6
May 15 12:38:59.890828 kernel: NET: Registered PF_PACKET protocol family
May 15 12:38:59.890836 kernel: Key type dns_resolver registered
May 15 12:38:59.890843 kernel: IPI shorthand broadcast: enabled
May 15 12:38:59.890850 kernel: sched_clock: Marking stable (2771003810, 214380810)->(3020736470, -35351850)
May 15 12:38:59.890857 kernel: registered taskstats version 1
May 15 12:38:59.890864 kernel: Loading compiled-in X.509 certificates
May 15 12:38:59.890871 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.20-flatcar: 05e05785144663be6df1db78301487421c4773b6'
May 15 12:38:59.890878 kernel: Demotion targets for Node 0: null
May 15 12:38:59.890885 kernel: Key type .fscrypt registered
May 15 12:38:59.890892 kernel: Key type fscrypt-provisioning registered
May 15 12:38:59.890902 kernel: ima: No TPM chip found, activating TPM-bypass!
May 15 12:38:59.890909 kernel: ima: Allocated hash algorithm: sha1
May 15 12:38:59.890917 kernel: ima: No architecture policies found
May 15 12:38:59.890924 kernel: clk: Disabling unused clocks
May 15 12:38:59.890931 kernel: Warning: unable to open an initial console.
May 15 12:38:59.890939 kernel: Freeing unused kernel image (initmem) memory: 54416K
May 15 12:38:59.890946 kernel: Write protecting the kernel read-only data: 24576k
May 15 12:38:59.890954 kernel: Freeing unused kernel image (rodata/data gap) memory: 296K
May 15 12:38:59.890963 kernel: Run /init as init process
May 15 12:38:59.890999 kernel: with arguments:
May 15 12:38:59.891007 kernel: /init
May 15 12:38:59.891015 kernel: with environment:
May 15 12:38:59.891022 kernel: HOME=/
May 15 12:38:59.891045 kernel: TERM=linux
May 15 12:38:59.891055 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 15 12:38:59.891064 systemd[1]: Successfully made /usr/ read-only.
May 15 12:38:59.891076 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 15 12:38:59.891087 systemd[1]: Detected virtualization kvm.
May 15 12:38:59.891095 systemd[1]: Detected architecture x86-64.
May 15 12:38:59.891103 systemd[1]: Running in initrd.
May 15 12:38:59.891110 systemd[1]: No hostname configured, using default hostname.
May 15 12:38:59.891118 systemd[1]: Hostname set to <localhost>.
May 15 12:38:59.891126 systemd[1]: Initializing machine ID from random generator.
May 15 12:38:59.891134 systemd[1]: Queued start job for default target initrd.target.
May 15 12:38:59.891145 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 15 12:38:59.891153 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 15 12:38:59.891162 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
May 15 12:38:59.891170 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 15 12:38:59.891178 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
May 15 12:38:59.891187 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
May 15 12:38:59.891196 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
May 15 12:38:59.891206 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
May 15 12:38:59.891214 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 15 12:38:59.891222 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 15 12:38:59.891231 systemd[1]: Reached target paths.target - Path Units.
May 15 12:38:59.891239 systemd[1]: Reached target slices.target - Slice Units.
May 15 12:38:59.891247 systemd[1]: Reached target swap.target - Swaps.
May 15 12:38:59.891256 systemd[1]: Reached target timers.target - Timer Units.
May 15 12:38:59.891265 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
May 15 12:38:59.891276 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 15 12:38:59.891284 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 15 12:38:59.891292 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
May 15 12:38:59.891300 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 15 12:38:59.891308 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 15 12:38:59.891316 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 15 12:38:59.891328 systemd[1]: Reached target sockets.target - Socket Units.
May 15 12:38:59.891336 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
May 15 12:38:59.891345 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 15 12:38:59.891353 systemd[1]: Finished network-cleanup.service - Network Cleanup.
May 15 12:38:59.891361 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
May 15 12:38:59.891370 systemd[1]: Starting systemd-fsck-usr.service...
May 15 12:38:59.891378 systemd[1]: Starting systemd-journald.service - Journal Service...
May 15 12:38:59.891386 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 15 12:38:59.891396 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 15 12:38:59.891404 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
May 15 12:38:59.891443 systemd-journald[206]: Collecting audit messages is disabled.
May 15 12:38:59.891467 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 15 12:38:59.891476 systemd[1]: Finished systemd-fsck-usr.service.
May 15 12:38:59.891485 systemd-journald[206]: Journal started
May 15 12:38:59.891506 systemd-journald[206]: Runtime Journal (/run/log/journal/6e107da1e3014d84a662053eeecb5a24) is 8M, max 78.5M, 70.5M free.
May 15 12:38:59.869845 systemd-modules-load[207]: Inserted module 'overlay'
May 15 12:38:59.900994 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 15 12:38:59.913987 systemd[1]: Started systemd-journald.service - Journal Service.
May 15 12:38:59.919992 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 15 12:38:59.920408 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 15 12:38:59.989251 kernel: Bridge firewalling registered
May 15 12:38:59.924997 systemd-modules-load[207]: Inserted module 'br_netfilter'
May 15 12:38:59.990038 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 15 12:38:59.996122 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 15 12:38:59.998486 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 15 12:39:00.002687 systemd-tmpfiles[220]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
May 15 12:39:00.003127 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 15 12:39:00.007081 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 15 12:39:00.011119 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 15 12:39:00.014872 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 15 12:39:00.026065 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 15 12:39:00.028132 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 15 12:39:00.032128 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 15 12:39:00.034296 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 15 12:39:00.039164 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
May 15 12:39:00.064070 dracut-cmdline[244]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=48287e633374b880fa618bd42bee102ae77c50831859c6cedd6ca9e1aec3dd5c
May 15 12:39:00.088962 systemd-resolved[241]: Positive Trust Anchors:
May 15 12:39:00.089726 systemd-resolved[241]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 15 12:39:00.089754 systemd-resolved[241]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 15 12:39:00.093596 systemd-resolved[241]: Defaulting to hostname 'linux'.
May 15 12:39:00.094833 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 15 12:39:00.096952 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 15 12:39:00.175007 kernel: SCSI subsystem initialized
May 15 12:39:00.189003 kernel: Loading iSCSI transport class v2.0-870.
May 15 12:39:00.200014 kernel: iscsi: registered transport (tcp)
May 15 12:39:00.220452 kernel: iscsi: registered transport (qla4xxx)
May 15 12:39:00.220501 kernel: QLogic iSCSI HBA Driver
May 15 12:39:00.243117 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 15 12:39:00.259276 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 15 12:39:00.262225 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 15 12:39:00.319183 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
May 15 12:39:00.321533 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
May 15 12:39:00.376004 kernel: raid6: avx2x4 gen() 29028 MB/s
May 15 12:39:00.394012 kernel: raid6: avx2x2 gen() 31327 MB/s
May 15 12:39:00.412390 kernel: raid6: avx2x1 gen() 18834 MB/s
May 15 12:39:00.412487 kernel: raid6: using algorithm avx2x2 gen() 31327 MB/s
May 15 12:39:00.431412 kernel: raid6: .... xor() 28750 MB/s, rmw enabled
May 15 12:39:00.431530 kernel: raid6: using avx2x2 recovery algorithm
May 15 12:39:00.450005 kernel: xor: automatically using best checksumming function avx
May 15 12:39:00.587011 kernel: Btrfs loaded, zoned=no, fsverity=no
May 15 12:39:00.594989 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
May 15 12:39:00.597847 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 15 12:39:00.623640 systemd-udevd[455]: Using default interface naming scheme 'v255'.
May 15 12:39:00.628630 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 15 12:39:00.631483 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
May 15 12:39:00.650931 dracut-pre-trigger[460]: rd.md=0: removing MD RAID activation
May 15 12:39:00.678461 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
May 15 12:39:00.680326 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 15 12:39:00.753884 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 15 12:39:00.758254 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
May 15 12:39:00.820990 kernel: virtio_scsi virtio0: 2/0/0 default/read/poll queues
May 15 12:39:00.833239 kernel: scsi host0: Virtio SCSI HBA
May 15 12:39:00.835994 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
May 15 12:39:00.932062 kernel: cryptd: max_cpu_qlen set to 1000
May 15 12:39:00.968020 kernel: libata version 3.00 loaded.
May 15 12:39:00.968596 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 15 12:39:00.968779 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 15 12:39:00.981805 kernel: sd 0:0:0:0: Power-on or device reset occurred
May 15 12:39:00.998157 kernel: sd 0:0:0:0: [sda] 167739392 512-byte logical blocks: (85.9 GB/80.0 GiB)
May 15 12:39:00.998322 kernel: sd 0:0:0:0: [sda] Write Protect is off
May 15 12:39:00.998458 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08
May 15 12:39:00.998588 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
May 15 12:39:00.998714 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
May 15 12:39:00.998726 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
May 15 12:39:00.998736 kernel: GPT:9289727 != 167739391
May 15 12:39:00.998746 kernel: GPT:Alternate GPT header not at the end of the disk.
May 15 12:39:00.998758 kernel: GPT:9289727 != 167739391
May 15 12:39:00.998767 kernel: GPT: Use GNU Parted to correct GPT errors.
May 15 12:39:00.998777 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
May 15 12:39:00.998786 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
May 15 12:39:00.992085 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
May 15 12:39:00.993601 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 15 12:39:00.994460 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
May 15 12:39:01.011998 kernel: AES CTR mode by8 optimization enabled
May 15 12:39:01.019068 kernel: ahci 0000:00:1f.2: version 3.0
May 15 12:39:01.068891 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
May 15 12:39:01.068910 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
May 15 12:39:01.069552 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
May 15 12:39:01.069731 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
May 15 12:39:01.069862 kernel: scsi host1: ahci
May 15 12:39:01.070409 kernel: scsi host2: ahci
May 15 12:39:01.070546 kernel: scsi host3: ahci
May 15 12:39:01.070723 kernel: scsi host4: ahci
May 15 12:39:01.070855 kernel: scsi host5: ahci
May 15 12:39:01.071472 kernel: scsi host6: ahci
May 15 12:39:01.071623 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3100 irq 46 lpm-pol 0
May 15 12:39:01.071642 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3180 irq 46 lpm-pol 0
May 15 12:39:01.071655 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3200 irq 46 lpm-pol 0
May 15 12:39:01.071664 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3280 irq 46 lpm-pol 0
May 15 12:39:01.071679 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3300 irq 46 lpm-pol 0
May 15 12:39:01.071688 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3380 irq 46 lpm-pol 0
May 15 12:39:01.105637 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM.
May 15 12:39:01.175711 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 15 12:39:01.195438 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
May 15 12:39:01.202172 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A.
May 15 12:39:01.202799 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A.
May 15 12:39:01.211775 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT.
May 15 12:39:01.214119 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
May 15 12:39:01.229299 disk-uuid[624]: Primary Header is updated.
May 15 12:39:01.229299 disk-uuid[624]: Secondary Entries is updated.
May 15 12:39:01.229299 disk-uuid[624]: Secondary Header is updated.
May 15 12:39:01.251358 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
May 15 12:39:01.265016 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
May 15 12:39:01.379117 kernel: ata6: SATA link down (SStatus 0 SControl 300)
May 15 12:39:01.379174 kernel: ata4: SATA link down (SStatus 0 SControl 300)
May 15 12:39:01.379187 kernel: ata2: SATA link down (SStatus 0 SControl 300)
May 15 12:39:01.387802 kernel: ata5: SATA link down (SStatus 0 SControl 300)
May 15 12:39:01.387839 kernel: ata3: SATA link down (SStatus 0 SControl 300)
May 15 12:39:01.391274 kernel: ata1: SATA link down (SStatus 0 SControl 300)
May 15 12:39:01.422392 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
May 15 12:39:01.444365 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
May 15 12:39:01.445941 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 15 12:39:01.446578 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 15 12:39:01.448955 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
May 15 12:39:01.470605 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
May 15 12:39:02.267589 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
May 15 12:39:02.267674 disk-uuid[625]: The operation has completed successfully.
May 15 12:39:02.327036 systemd[1]: disk-uuid.service: Deactivated successfully.
May 15 12:39:02.327159 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
May 15 12:39:02.348748 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
May 15 12:39:02.367359 sh[653]: Success
May 15 12:39:02.385316 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 15 12:39:02.385371 kernel: device-mapper: uevent: version 1.0.3
May 15 12:39:02.388378 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
May 15 12:39:02.398069 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
May 15 12:39:02.446377 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
May 15 12:39:02.451044 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
May 15 12:39:02.465325 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
May 15 12:39:02.475990 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay'
May 15 12:39:02.476055 kernel: BTRFS: device fsid 2d504097-db49-4d66-a0d5-eeb665b21004 devid 1 transid 41 /dev/mapper/usr (254:0) scanned by mount (665)
May 15 12:39:02.483955 kernel: BTRFS info (device dm-0): first mount of filesystem 2d504097-db49-4d66-a0d5-eeb665b21004
May 15 12:39:02.483995 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
May 15 12:39:02.484007 kernel: BTRFS info (device dm-0): using free-space-tree
May 15 12:39:02.492032 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
May 15 12:39:02.493041 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
May 15 12:39:02.493710 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
May 15 12:39:02.494643 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
May 15 12:39:02.498021 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
May 15 12:39:02.524489 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 (8:6) scanned by mount (700)
May 15 12:39:02.524539 kernel: BTRFS info (device sda6): first mount of filesystem afd0c70c-d15e-448c-8325-f96e3c3ed3a5
May 15 12:39:02.528202 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
May 15 12:39:02.528240 kernel: BTRFS info (device sda6): using free-space-tree
May 15 12:39:02.544039 kernel: BTRFS info (device sda6): last unmount of filesystem afd0c70c-d15e-448c-8325-f96e3c3ed3a5
May 15 12:39:02.544375 systemd[1]: Finished ignition-setup.service - Ignition (setup).
May 15 12:39:02.547178 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
May 15 12:39:02.613152 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 15 12:39:02.616584 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 15 12:39:02.675922 systemd-networkd[835]: lo: Link UP
May 15 12:39:02.675936 systemd-networkd[835]: lo: Gained carrier
May 15 12:39:02.677757 systemd-networkd[835]: Enumeration completed
May 15 12:39:02.677841 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 15 12:39:02.678517 systemd[1]: Reached target network.target - Network.
May 15 12:39:02.679393 systemd-networkd[835]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 15 12:39:02.679397 systemd-networkd[835]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 15 12:39:02.681275 systemd-networkd[835]: eth0: Link UP
May 15 12:39:02.682847 ignition[763]: Ignition 2.21.0
May 15 12:39:02.681279 systemd-networkd[835]: eth0: Gained carrier
May 15 12:39:02.682854 ignition[763]: Stage: fetch-offline
May 15 12:39:02.681287 systemd-networkd[835]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 15 12:39:02.682881 ignition[763]: no configs at "/usr/lib/ignition/base.d"
May 15 12:39:02.685749 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
May 15 12:39:02.682890 ignition[763]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
May 15 12:39:02.683019 ignition[763]: parsed url from cmdline: ""
May 15 12:39:02.689085 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
May 15 12:39:02.683025 ignition[763]: no config URL provided
May 15 12:39:02.683031 ignition[763]: reading system config file "/usr/lib/ignition/user.ign"
May 15 12:39:02.683040 ignition[763]: no config at "/usr/lib/ignition/user.ign"
May 15 12:39:02.683045 ignition[763]: failed to fetch config: resource requires networking
May 15 12:39:02.683420 ignition[763]: Ignition finished successfully
May 15 12:39:02.713377 ignition[843]: Ignition 2.21.0
May 15 12:39:02.713390 ignition[843]: Stage: fetch
May 15 12:39:02.713547 ignition[843]: no configs at "/usr/lib/ignition/base.d"
May 15 12:39:02.713559 ignition[843]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
May 15 12:39:02.713661 ignition[843]: parsed url from cmdline: ""
May 15 12:39:02.713665 ignition[843]: no config URL provided
May 15 12:39:02.713670 ignition[843]: reading system config file "/usr/lib/ignition/user.ign"
May 15 12:39:02.713700 ignition[843]: no config at "/usr/lib/ignition/user.ign"
May 15 12:39:02.713737 ignition[843]: PUT http://169.254.169.254/v1/token: attempt #1
May 15 12:39:02.713946 ignition[843]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable
May 15 12:39:02.914681 ignition[843]: PUT http://169.254.169.254/v1/token: attempt #2
May 15 12:39:02.915254 ignition[843]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable
May 15 12:39:03.315364 ignition[843]: PUT http://169.254.169.254/v1/token: attempt #3
May 15 12:39:03.315513 ignition[843]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable
May 15 12:39:03.725038 systemd-networkd[835]: eth0: DHCPv4 address 172.236.125.189/24, gateway 172.236.125.1 acquired from 23.215.118.129
May 15 12:39:03.990197 systemd-networkd[835]: eth0: Gained IPv6LL
May 15 12:39:04.116072 ignition[843]: PUT http://169.254.169.254/v1/token: attempt #4
May 15 12:39:04.207497 ignition[843]: PUT result: OK
May 15 12:39:04.207559 ignition[843]: GET http://169.254.169.254/v1/user-data: attempt #1
May 15 12:39:04.316340 ignition[843]: GET result: OK
May 15 12:39:04.316479 ignition[843]: parsing config with SHA512: 350c63379311d83ffcb9c580de47769e850890e7bd731faf90a47178e8d7a76d54d26c01629d390699ab1b406519224a09e59bdf26ffa4481871215bef869c52
May 15 12:39:04.320035 unknown[843]: fetched base config from "system"
May 15 12:39:04.320050 unknown[843]: fetched base config from "system"
May 15 12:39:04.320365 ignition[843]: fetch: fetch complete
May 15 12:39:04.320059 unknown[843]: fetched user config from "akamai"
May 15 12:39:04.320372 ignition[843]: fetch: fetch passed
May 15 12:39:04.323955 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
May 15 12:39:04.320415 ignition[843]: Ignition finished successfully
May 15 12:39:04.347345 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
May 15 12:39:04.376888 ignition[851]: Ignition 2.21.0
May 15 12:39:04.376904 ignition[851]: Stage: kargs
May 15 12:39:04.377053 ignition[851]: no configs at "/usr/lib/ignition/base.d"
May 15 12:39:04.377064 ignition[851]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
May 15 12:39:04.377596 ignition[851]: kargs: kargs passed
May 15 12:39:04.377635 ignition[851]: Ignition finished successfully
May 15 12:39:04.380154 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
May 15 12:39:04.382189 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
May 15 12:39:04.404452 ignition[858]: Ignition 2.21.0
May 15 12:39:04.404464 ignition[858]: Stage: disks
May 15 12:39:04.404574 ignition[858]: no configs at "/usr/lib/ignition/base.d"
May 15 12:39:04.404584 ignition[858]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
May 15 12:39:04.406541 systemd[1]: Finished ignition-disks.service - Ignition (disks).
May 15 12:39:04.405135 ignition[858]: disks: disks passed
May 15 12:39:04.408006 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
May 15 12:39:04.405174 ignition[858]: Ignition finished successfully
May 15 12:39:04.408765 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
May 15 12:39:04.409776 systemd[1]: Reached target local-fs.target - Local File Systems.
May 15 12:39:04.410955 systemd[1]: Reached target sysinit.target - System Initialization.
May 15 12:39:04.411947 systemd[1]: Reached target basic.target - Basic System.
May 15 12:39:04.413935 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
May 15 12:39:04.449297 systemd-fsck[867]: ROOT: clean, 15/553520 files, 52789/553472 blocks
May 15 12:39:04.453052 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
May 15 12:39:04.457038 systemd[1]: Mounting sysroot.mount - /sysroot...
May 15 12:39:04.561983 kernel: EXT4-fs (sda9): mounted filesystem f7dea4bd-2644-4592-b85b-330f322c4d2b r/w with ordered data mode. Quota mode: none.
May 15 12:39:04.562635 systemd[1]: Mounted sysroot.mount - /sysroot.
May 15 12:39:04.563716 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
May 15 12:39:04.565697 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 15 12:39:04.568031 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
May 15 12:39:04.569303 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
May 15 12:39:04.569343 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 15 12:39:04.569366 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
May 15 12:39:04.584198 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
May 15 12:39:04.585706 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
May 15 12:39:04.595105 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 (8:6) scanned by mount (875)
May 15 12:39:04.601198 kernel: BTRFS info (device sda6): first mount of filesystem afd0c70c-d15e-448c-8325-f96e3c3ed3a5
May 15 12:39:04.601225 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
May 15 12:39:04.601237 kernel: BTRFS info (device sda6): using free-space-tree
May 15 12:39:04.610803 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 15 12:39:04.640805 initrd-setup-root[899]: cut: /sysroot/etc/passwd: No such file or directory
May 15 12:39:04.644991 initrd-setup-root[906]: cut: /sysroot/etc/group: No such file or directory
May 15 12:39:04.650104 initrd-setup-root[913]: cut: /sysroot/etc/shadow: No such file or directory
May 15 12:39:04.654350 initrd-setup-root[920]: cut: /sysroot/etc/gshadow: No such file or directory
May 15 12:39:04.742749 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
May 15 12:39:04.745344 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
May 15 12:39:04.747053 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... May 15 12:39:04.761425 systemd[1]: sysroot-oem.mount: Deactivated successfully. May 15 12:39:04.764004 kernel: BTRFS info (device sda6): last unmount of filesystem afd0c70c-d15e-448c-8325-f96e3c3ed3a5 May 15 12:39:04.779248 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. May 15 12:39:04.786825 ignition[989]: INFO : Ignition 2.21.0 May 15 12:39:04.786825 ignition[989]: INFO : Stage: mount May 15 12:39:04.788042 ignition[989]: INFO : no configs at "/usr/lib/ignition/base.d" May 15 12:39:04.788042 ignition[989]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai" May 15 12:39:04.789477 ignition[989]: INFO : mount: mount passed May 15 12:39:04.791282 ignition[989]: INFO : Ignition finished successfully May 15 12:39:04.791345 systemd[1]: Finished ignition-mount.service - Ignition (mount). May 15 12:39:04.794275 systemd[1]: Starting ignition-files.service - Ignition (files)... May 15 12:39:05.564279 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 15 12:39:05.593001 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 (8:6) scanned by mount (1001) May 15 12:39:05.597114 kernel: BTRFS info (device sda6): first mount of filesystem afd0c70c-d15e-448c-8325-f96e3c3ed3a5 May 15 12:39:05.597167 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm May 15 12:39:05.597187 kernel: BTRFS info (device sda6): using free-space-tree May 15 12:39:05.603172 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. May 15 12:39:05.638108 ignition[1018]: INFO : Ignition 2.21.0 May 15 12:39:05.638108 ignition[1018]: INFO : Stage: files May 15 12:39:05.639454 ignition[1018]: INFO : no configs at "/usr/lib/ignition/base.d" May 15 12:39:05.639454 ignition[1018]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai" May 15 12:39:05.641031 ignition[1018]: DEBUG : files: compiled without relabeling support, skipping May 15 12:39:05.642455 ignition[1018]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 15 12:39:05.642455 ignition[1018]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 15 12:39:05.644698 ignition[1018]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 15 12:39:05.645657 ignition[1018]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 15 12:39:05.645657 ignition[1018]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 15 12:39:05.645178 unknown[1018]: wrote ssh authorized keys file for user: core May 15 12:39:05.647980 ignition[1018]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" May 15 12:39:05.647980 ignition[1018]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 May 15 12:39:05.931901 ignition[1018]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK May 15 12:39:06.197168 ignition[1018]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" May 15 12:39:06.197168 ignition[1018]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" May 15 12:39:06.199705 ignition[1018]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file 
"/sysroot/home/core/install.sh" May 15 12:39:06.199705 ignition[1018]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" May 15 12:39:06.199705 ignition[1018]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" May 15 12:39:06.199705 ignition[1018]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" May 15 12:39:06.199705 ignition[1018]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" May 15 12:39:06.199705 ignition[1018]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" May 15 12:39:06.199705 ignition[1018]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" May 15 12:39:06.199705 ignition[1018]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" May 15 12:39:06.199705 ignition[1018]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 15 12:39:06.207845 ignition[1018]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" May 15 12:39:06.207845 ignition[1018]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" May 15 12:39:06.207845 ignition[1018]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" May 15 12:39:06.207845 ignition[1018]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 May 15 12:39:06.538897 ignition[1018]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK May 15 12:39:06.728181 ignition[1018]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" May 15 12:39:06.728181 ignition[1018]: INFO : files: op(b): [started] processing unit "prepare-helm.service" May 15 12:39:06.730767 ignition[1018]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 15 12:39:06.732934 ignition[1018]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 15 12:39:06.732934 ignition[1018]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" May 15 12:39:06.732934 ignition[1018]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" May 15 12:39:06.735759 ignition[1018]: INFO : files: op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" May 15 12:39:06.735759 ignition[1018]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" May 15 12:39:06.735759 ignition[1018]: INFO : files: 
op(d): [finished] processing unit "coreos-metadata.service" May 15 12:39:06.735759 ignition[1018]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service" May 15 12:39:06.735759 ignition[1018]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service" May 15 12:39:06.735759 ignition[1018]: INFO : files: createResultFile: createFiles: op(10): [started] writing file "/sysroot/etc/.ignition-result.json" May 15 12:39:06.735759 ignition[1018]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json" May 15 12:39:06.735759 ignition[1018]: INFO : files: files passed May 15 12:39:06.735759 ignition[1018]: INFO : Ignition finished successfully May 15 12:39:06.735619 systemd[1]: Finished ignition-files.service - Ignition (files). May 15 12:39:06.739094 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... May 15 12:39:06.743071 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... May 15 12:39:06.754502 systemd[1]: ignition-quench.service: Deactivated successfully. May 15 12:39:06.755251 systemd[1]: Finished ignition-quench.service - Ignition (record completion). May 15 12:39:06.761440 initrd-setup-root-after-ignition[1047]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 15 12:39:06.761440 initrd-setup-root-after-ignition[1047]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory May 15 12:39:06.764463 initrd-setup-root-after-ignition[1051]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 15 12:39:06.766204 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. May 15 12:39:06.767859 systemd[1]: Reached target ignition-complete.target - Ignition Complete. May 15 12:39:06.769282 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... May 15 12:39:06.827003 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 15 12:39:06.827135 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. May 15 12:39:06.828773 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. May 15 12:39:06.829747 systemd[1]: Reached target initrd.target - Initrd Default Target. May 15 12:39:06.831046 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. May 15 12:39:06.831843 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... May 15 12:39:06.871698 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 15 12:39:06.873806 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... May 15 12:39:06.891516 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. May 15 12:39:06.892284 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. May 15 12:39:06.893555 systemd[1]: Stopped target timers.target - Timer Units. May 15 12:39:06.894786 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 15 12:39:06.894929 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 15 12:39:06.896236 systemd[1]: Stopped target initrd.target - Initrd Default Target. May 15 12:39:06.897140 systemd[1]: Stopped target basic.target - Basic System. 
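Taken together, the files stage above executed a user-supplied Ignition config: it downloaded the helm tarball and the kubernetes sysext image, symlinked the image into /etc/extensions so systemd-sysext will pick it up, wrote several manifests plus a coreos-metadata drop-in, and preset prepare-helm.service to enabled. A minimal Butane sketch that would compile to a similar config; the variant/version line is an assumption and the unit contents and smaller files are omitted, while every path and URL comes from the log:

    cat > config.bu <<'EOF'
    variant: flatcar
    version: 1.0.0
    storage:
      files:
        - path: /opt/helm-v3.13.2-linux-amd64.tar.gz
          contents:
            source: https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz
        - path: /opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw
          contents:
            source: https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw
      links:
        - path: /etc/extensions/kubernetes.raw
          target: /opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw
    systemd:
      units:
        - name: prepare-helm.service
          enabled: true   # the real config also carries the unit's contents, omitted here
    EOF
    # Compile to the Ignition JSON that the fetch stage retrieved as user data.
    butane --strict < config.bu > config.ign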
May 15 12:39:06.898351 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. May 15 12:39:06.899432 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. May 15 12:39:06.900645 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. May 15 12:39:06.901847 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. May 15 12:39:06.903266 systemd[1]: Stopped target remote-fs.target - Remote File Systems. May 15 12:39:06.904469 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. May 15 12:39:06.905747 systemd[1]: Stopped target sysinit.target - System Initialization. May 15 12:39:06.906953 systemd[1]: Stopped target local-fs.target - Local File Systems. May 15 12:39:06.908203 systemd[1]: Stopped target swap.target - Swaps. May 15 12:39:06.909372 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 15 12:39:06.909517 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. May 15 12:39:06.910835 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. May 15 12:39:06.911626 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 15 12:39:06.912733 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. May 15 12:39:06.912857 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 15 12:39:06.914110 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 15 12:39:06.914211 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. May 15 12:39:06.915844 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 15 12:39:06.916009 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. May 15 12:39:06.917304 systemd[1]: ignition-files.service: Deactivated successfully. May 15 12:39:06.917436 systemd[1]: Stopped ignition-files.service - Ignition (files). May 15 12:39:06.919355 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... May 15 12:39:06.924432 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... May 15 12:39:06.926337 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 15 12:39:06.926474 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. May 15 12:39:06.927987 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 15 12:39:06.928099 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. May 15 12:39:06.937999 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 15 12:39:06.938117 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. May 15 12:39:06.951144 ignition[1071]: INFO : Ignition 2.21.0 May 15 12:39:06.952502 ignition[1071]: INFO : Stage: umount May 15 12:39:06.952502 ignition[1071]: INFO : no configs at "/usr/lib/ignition/base.d" May 15 12:39:06.952502 ignition[1071]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai" May 15 12:39:06.956301 ignition[1071]: INFO : umount: umount passed May 15 12:39:06.956301 ignition[1071]: INFO : Ignition finished successfully May 15 12:39:06.955759 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 15 12:39:06.962387 systemd[1]: ignition-mount.service: Deactivated successfully. May 15 12:39:06.962489 systemd[1]: Stopped ignition-mount.service - Ignition (mount). May 15 12:39:06.979933 systemd[1]: sysroot-boot.service: Deactivated successfully. 
May 15 12:39:06.980049 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. May 15 12:39:06.981449 systemd[1]: ignition-disks.service: Deactivated successfully. May 15 12:39:06.981528 systemd[1]: Stopped ignition-disks.service - Ignition (disks). May 15 12:39:06.982811 systemd[1]: ignition-kargs.service: Deactivated successfully. May 15 12:39:06.982858 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). May 15 12:39:06.983881 systemd[1]: ignition-fetch.service: Deactivated successfully. May 15 12:39:06.983926 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). May 15 12:39:06.984919 systemd[1]: Stopped target network.target - Network. May 15 12:39:06.985986 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 15 12:39:06.986037 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). May 15 12:39:06.987245 systemd[1]: Stopped target paths.target - Path Units. May 15 12:39:06.988265 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 15 12:39:06.988519 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 15 12:39:06.989386 systemd[1]: Stopped target slices.target - Slice Units. May 15 12:39:06.990508 systemd[1]: Stopped target sockets.target - Socket Units. May 15 12:39:06.991627 systemd[1]: iscsid.socket: Deactivated successfully. May 15 12:39:06.991669 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. May 15 12:39:06.992679 systemd[1]: iscsiuio.socket: Deactivated successfully. May 15 12:39:06.992721 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 15 12:39:06.993853 systemd[1]: ignition-setup.service: Deactivated successfully. May 15 12:39:06.993903 systemd[1]: Stopped ignition-setup.service - Ignition (setup). May 15 12:39:06.995104 systemd[1]: ignition-setup-pre.service: Deactivated successfully. May 15 12:39:06.995149 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. May 15 12:39:06.996157 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 15 12:39:06.996204 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. May 15 12:39:06.997497 systemd[1]: Stopping systemd-networkd.service - Network Configuration... May 15 12:39:06.998802 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... May 15 12:39:07.007067 systemd[1]: systemd-resolved.service: Deactivated successfully. May 15 12:39:07.007203 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. May 15 12:39:07.011083 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. May 15 12:39:07.011337 systemd[1]: systemd-networkd.service: Deactivated successfully. May 15 12:39:07.011491 systemd[1]: Stopped systemd-networkd.service - Network Configuration. May 15 12:39:07.013930 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. May 15 12:39:07.014496 systemd[1]: Stopped target network-pre.target - Preparation for Network. May 15 12:39:07.015403 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 15 12:39:07.015442 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. May 15 12:39:07.017422 systemd[1]: Stopping network-cleanup.service - Network Cleanup... May 15 12:39:07.019344 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 15 12:39:07.019398 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. 
May 15 12:39:07.020574 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 15 12:39:07.020672 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 15 12:39:07.023099 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 15 12:39:07.023149 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. May 15 12:39:07.023856 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. May 15 12:39:07.023906 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. May 15 12:39:07.025576 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... May 15 12:39:07.028201 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. May 15 12:39:07.028264 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. May 15 12:39:07.049031 systemd[1]: systemd-udevd.service: Deactivated successfully. May 15 12:39:07.049201 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. May 15 12:39:07.050076 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 15 12:39:07.050119 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. May 15 12:39:07.050942 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 15 12:39:07.051001 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. May 15 12:39:07.051557 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 15 12:39:07.051604 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. May 15 12:39:07.053380 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 15 12:39:07.053426 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. May 15 12:39:07.054710 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 15 12:39:07.054761 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 15 12:39:07.058070 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... May 15 12:39:07.062042 systemd[1]: systemd-network-generator.service: Deactivated successfully. May 15 12:39:07.062119 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. May 15 12:39:07.063525 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. May 15 12:39:07.063576 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 15 12:39:07.064831 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. May 15 12:39:07.064876 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 15 12:39:07.068079 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 15 12:39:07.068127 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. May 15 12:39:07.069447 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 15 12:39:07.069498 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 15 12:39:07.073102 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. May 15 12:39:07.073155 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully. 
May 15 12:39:07.073197 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. May 15 12:39:07.073245 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. May 15 12:39:07.073649 systemd[1]: network-cleanup.service: Deactivated successfully. May 15 12:39:07.073748 systemd[1]: Stopped network-cleanup.service - Network Cleanup. May 15 12:39:07.074911 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 15 12:39:07.075039 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. May 15 12:39:07.076721 systemd[1]: Reached target initrd-switch-root.target - Switch Root. May 15 12:39:07.078727 systemd[1]: Starting initrd-switch-root.service - Switch Root... May 15 12:39:07.095692 systemd[1]: Switching root. May 15 12:39:07.126756 systemd-journald[206]: Journal stopped May 15 12:39:08.197544 systemd-journald[206]: Received SIGTERM from PID 1 (systemd). May 15 12:39:08.197570 kernel: SELinux: policy capability network_peer_controls=1 May 15 12:39:08.197582 kernel: SELinux: policy capability open_perms=1 May 15 12:39:08.197594 kernel: SELinux: policy capability extended_socket_class=1 May 15 12:39:08.197603 kernel: SELinux: policy capability always_check_network=0 May 15 12:39:08.197612 kernel: SELinux: policy capability cgroup_seclabel=1 May 15 12:39:08.197621 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 15 12:39:08.197630 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 15 12:39:08.197639 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 15 12:39:08.197648 kernel: SELinux: policy capability userspace_initial_context=0 May 15 12:39:08.197659 kernel: audit: type=1403 audit(1747312747.278:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 May 15 12:39:08.197669 systemd[1]: Successfully loaded SELinux policy in 74.685ms. May 15 12:39:08.197680 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 12.279ms. May 15 12:39:08.197691 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) May 15 12:39:08.197702 systemd[1]: Detected virtualization kvm. May 15 12:39:08.197714 systemd[1]: Detected architecture x86-64. May 15 12:39:08.197723 systemd[1]: Detected first boot. May 15 12:39:08.197734 systemd[1]: Initializing machine ID from random generator. May 15 12:39:08.197744 zram_generator::config[1117]: No configuration found. May 15 12:39:08.197754 kernel: Guest personality initialized and is inactive May 15 12:39:08.197763 kernel: VMCI host device registered (name=vmci, major=10, minor=125) May 15 12:39:08.197773 kernel: Initialized host personality May 15 12:39:08.197784 kernel: NET: Registered PF_VSOCK protocol family May 15 12:39:08.197793 systemd[1]: Populated /etc with preset unit settings. May 15 12:39:08.197805 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. May 15 12:39:08.197815 systemd[1]: initrd-switch-root.service: Deactivated successfully. May 15 12:39:08.197825 systemd[1]: Stopped initrd-switch-root.service - Switch Root. May 15 12:39:08.197835 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. 
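After the pivot, PID 1 loads the SELinux policy, detects a first boot, and generates a transient machine ID while the journal restarts inside the new root. The state those messages describe can be read back later from a shell (getenforce assumes the libselinux utilities are present on the image):

    getenforce                 # enforcement mode of the policy loaded above
    cat /etc/machine-id        # the ID "Initializing machine ID from random generator" produced
    journalctl --list-boots    # the restarted journal, one entry per boot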
May 15 12:39:08.197845 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. May 15 12:39:08.197857 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. May 15 12:39:08.197868 systemd[1]: Created slice system-getty.slice - Slice /system/getty. May 15 12:39:08.197878 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. May 15 12:39:08.197888 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. May 15 12:39:08.197898 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. May 15 12:39:08.197908 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. May 15 12:39:08.197918 systemd[1]: Created slice user.slice - User and Session Slice. May 15 12:39:08.197930 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 15 12:39:08.197941 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 15 12:39:08.197951 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. May 15 12:39:08.197961 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. May 15 12:39:08.201947 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. May 15 12:39:08.201964 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 15 12:39:08.201990 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... May 15 12:39:08.202001 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 15 12:39:08.202014 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 15 12:39:08.202024 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. May 15 12:39:08.202034 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. May 15 12:39:08.202045 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. May 15 12:39:08.202082 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. May 15 12:39:08.202093 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 15 12:39:08.202103 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 15 12:39:08.202113 systemd[1]: Reached target slices.target - Slice Units. May 15 12:39:08.202126 systemd[1]: Reached target swap.target - Swaps. May 15 12:39:08.202136 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. May 15 12:39:08.202146 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. May 15 12:39:08.202156 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. May 15 12:39:08.202167 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 15 12:39:08.202179 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 15 12:39:08.202189 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 15 12:39:08.202199 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. May 15 12:39:08.202209 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... May 15 12:39:08.202219 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... 
May 15 12:39:08.202229 systemd[1]: Mounting media.mount - External Media Directory... May 15 12:39:08.202239 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 15 12:39:08.202249 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... May 15 12:39:08.202262 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... May 15 12:39:08.202272 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... May 15 12:39:08.202282 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 15 12:39:08.202292 systemd[1]: Reached target machines.target - Containers. May 15 12:39:08.202303 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... May 15 12:39:08.202313 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 15 12:39:08.202324 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 15 12:39:08.202333 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... May 15 12:39:08.202345 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 15 12:39:08.202356 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 15 12:39:08.202366 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 15 12:39:08.202376 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... May 15 12:39:08.202386 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 15 12:39:08.202396 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 15 12:39:08.202406 kernel: fuse: init (API version 7.41) May 15 12:39:08.202416 systemd[1]: systemd-fsck-root.service: Deactivated successfully. May 15 12:39:08.202426 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. May 15 12:39:08.202438 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. May 15 12:39:08.202449 systemd[1]: Stopped systemd-fsck-usr.service. May 15 12:39:08.202460 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 15 12:39:08.202470 systemd[1]: Starting systemd-journald.service - Journal Service... May 15 12:39:08.202480 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 15 12:39:08.202490 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... May 15 12:39:08.202500 kernel: loop: module loaded May 15 12:39:08.202509 kernel: ACPI: bus type drm_connector registered May 15 12:39:08.202521 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... May 15 12:39:08.202531 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... May 15 12:39:08.202541 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 15 12:39:08.202551 systemd[1]: verity-setup.service: Deactivated successfully. May 15 12:39:08.202561 systemd[1]: Stopped verity-setup.service. 
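The modprobe@configfs/dm_mod/drm/efi_pstore/fuse/loop jobs above are instances of the modprobe@.service template that systemd ships: each one simply runs modprobe on its instance name, which is why the kernel prints the fuse and loop initialization lines in between. For example (the ExecStart shown is a paraphrase of the stock template; check the local copy):

    systemctl cat modprobe@.service        # template body, roughly ExecStart=-/sbin/modprobe -abq %I
    systemctl start modprobe@fuse.service  # loads the fuse module exactly as above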
May 15 12:39:08.202571 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 15 12:39:08.202581 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. May 15 12:39:08.202591 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. May 15 12:39:08.202626 systemd-journald[1196]: Collecting audit messages is disabled. May 15 12:39:08.202649 systemd[1]: Mounted media.mount - External Media Directory. May 15 12:39:08.202659 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. May 15 12:39:08.202670 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. May 15 12:39:08.202680 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. May 15 12:39:08.202692 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. May 15 12:39:08.202702 systemd-journald[1196]: Journal started May 15 12:39:08.202722 systemd-journald[1196]: Runtime Journal (/run/log/journal/88462e4139ff428b8130f9c4499faabd) is 8M, max 78.5M, 70.5M free. May 15 12:39:07.875484 systemd[1]: Queued start job for default target multi-user.target. May 15 12:39:07.888707 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. May 15 12:39:07.889231 systemd[1]: systemd-journald.service: Deactivated successfully. May 15 12:39:08.205092 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 15 12:39:08.208702 systemd[1]: Started systemd-journald.service - Journal Service. May 15 12:39:08.208517 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 15 12:39:08.208816 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. May 15 12:39:08.209787 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 15 12:39:08.210185 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 15 12:39:08.211190 systemd[1]: modprobe@drm.service: Deactivated successfully. May 15 12:39:08.211469 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 15 12:39:08.212363 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 15 12:39:08.212550 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 15 12:39:08.213547 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 15 12:39:08.213817 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. May 15 12:39:08.214725 systemd[1]: modprobe@loop.service: Deactivated successfully. May 15 12:39:08.215262 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 15 12:39:08.216192 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 15 12:39:08.217164 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. May 15 12:39:08.218139 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. May 15 12:39:08.219077 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. May 15 12:39:08.232895 systemd[1]: Reached target network-pre.target - Preparation for Network. May 15 12:39:08.237044 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... May 15 12:39:08.238714 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... 
May 15 12:39:08.240361 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 15 12:39:08.240392 systemd[1]: Reached target local-fs.target - Local File Systems. May 15 12:39:08.242618 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. May 15 12:39:08.248084 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... May 15 12:39:08.250418 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 15 12:39:08.252806 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... May 15 12:39:08.255702 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... May 15 12:39:08.258047 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 15 12:39:08.262744 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... May 15 12:39:08.264056 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 15 12:39:08.266076 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 15 12:39:08.270366 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... May 15 12:39:08.272370 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 15 12:39:08.276617 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. May 15 12:39:08.279108 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. May 15 12:39:08.285481 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. May 15 12:39:08.291113 systemd-journald[1196]: Time spent on flushing to /var/log/journal/88462e4139ff428b8130f9c4499faabd is 61.251ms for 1004 entries. May 15 12:39:08.291113 systemd-journald[1196]: System Journal (/var/log/journal/88462e4139ff428b8130f9c4499faabd) is 8M, max 195.6M, 187.6M free. May 15 12:39:08.369445 systemd-journald[1196]: Received client request to flush runtime journal. May 15 12:39:08.369488 kernel: loop0: detected capacity change from 0 to 8 May 15 12:39:08.369502 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 15 12:39:08.369518 kernel: loop1: detected capacity change from 0 to 146240 May 15 12:39:08.290341 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. May 15 12:39:08.297146 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... May 15 12:39:08.338278 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. May 15 12:39:08.345792 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 15 12:39:08.366007 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 15 12:39:08.372274 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. May 15 12:39:08.375588 systemd-tmpfiles[1241]: ACLs are not supported, ignoring. May 15 12:39:08.375605 systemd-tmpfiles[1241]: ACLs are not supported, ignoring. May 15 12:39:08.384854 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. 
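Here journald flushes the runtime journal from /run/log/journal into persistent storage under /var/log/journal, reporting the size budgets of both journals, while the first loop devices appear for the extension images about to be merged. The same accounting is available interactively:

    journalctl --disk-usage    # size of the persistent journal after the flush
    journalctl --flush         # what systemd-journal-flush.service triggers here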
May 15 12:39:08.388094 systemd[1]: Starting systemd-sysusers.service - Create System Users... May 15 12:39:08.394066 kernel: loop2: detected capacity change from 0 to 113872 May 15 12:39:08.429992 kernel: loop3: detected capacity change from 0 to 210664 May 15 12:39:08.437642 systemd[1]: Finished systemd-sysusers.service - Create System Users. May 15 12:39:08.443152 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 15 12:39:08.477989 kernel: loop4: detected capacity change from 0 to 8 May 15 12:39:08.480570 systemd-tmpfiles[1263]: ACLs are not supported, ignoring. May 15 12:39:08.480610 systemd-tmpfiles[1263]: ACLs are not supported, ignoring. May 15 12:39:08.485993 kernel: loop5: detected capacity change from 0 to 146240 May 15 12:39:08.486212 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 15 12:39:08.505992 kernel: loop6: detected capacity change from 0 to 113872 May 15 12:39:08.520999 kernel: loop7: detected capacity change from 0 to 210664 May 15 12:39:08.541771 (sd-merge)[1266]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-akamai'. May 15 12:39:08.542676 (sd-merge)[1266]: Merged extensions into '/usr'. May 15 12:39:08.546651 systemd[1]: Reload requested from client PID 1240 ('systemd-sysext') (unit systemd-sysext.service)... May 15 12:39:08.546738 systemd[1]: Reloading... May 15 12:39:08.635997 zram_generator::config[1293]: No configuration found. May 15 12:39:08.749532 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 15 12:39:08.840048 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 15 12:39:08.840355 systemd[1]: Reloading finished in 293 ms. May 15 12:39:08.846943 ldconfig[1235]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 15 12:39:08.854027 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. May 15 12:39:08.856881 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. May 15 12:39:08.867101 systemd[1]: Starting ensure-sysext.service... May 15 12:39:08.873108 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 15 12:39:08.897229 systemd[1]: Reload requested from client PID 1336 ('systemctl') (unit ensure-sysext.service)... May 15 12:39:08.897382 systemd[1]: Reloading... May 15 12:39:08.918810 systemd-tmpfiles[1338]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. May 15 12:39:08.919326 systemd-tmpfiles[1338]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. May 15 12:39:08.919611 systemd-tmpfiles[1338]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 15 12:39:08.919833 systemd-tmpfiles[1338]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. May 15 12:39:08.922240 systemd-tmpfiles[1338]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 15 12:39:08.922525 systemd-tmpfiles[1338]: ACLs are not supported, ignoring. May 15 12:39:08.922697 systemd-tmpfiles[1338]: ACLs are not supported, ignoring. 
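systemd-sysext has discovered four extension images (containerd-flatcar, docker-flatcar, kubernetes, oem-akamai), overlaid them onto /usr, and requested a daemon reload so the merged unit files become visible; the kubernetes image is the one the files stage linked into /etc/extensions earlier. The merge can be inspected and redone at runtime:

    systemd-sysext status    # which hierarchies are merged and from which images
    systemd-sysext list      # the discovered extension images
    systemd-sysext refresh   # re-merge after adding or removing an image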
May 15 12:39:08.929030 systemd-tmpfiles[1338]: Detected autofs mount point /boot during canonicalization of boot. May 15 12:39:08.930066 systemd-tmpfiles[1338]: Skipping /boot May 15 12:39:08.946880 systemd-tmpfiles[1338]: Detected autofs mount point /boot during canonicalization of boot. May 15 12:39:08.947458 systemd-tmpfiles[1338]: Skipping /boot May 15 12:39:08.983015 zram_generator::config[1365]: No configuration found. May 15 12:39:09.069371 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 15 12:39:09.138803 systemd[1]: Reloading finished in 241 ms. May 15 12:39:09.153237 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. May 15 12:39:09.165729 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 15 12:39:09.173625 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 15 12:39:09.177172 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... May 15 12:39:09.182144 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... May 15 12:39:09.188410 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 15 12:39:09.192784 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 15 12:39:09.195348 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... May 15 12:39:09.198903 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 15 12:39:09.201172 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 15 12:39:09.202464 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 15 12:39:09.209213 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 15 12:39:09.211484 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 15 12:39:09.213109 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 15 12:39:09.213253 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 15 12:39:09.213346 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 15 12:39:09.220518 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 15 12:39:09.220751 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 15 12:39:09.221016 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 15 12:39:09.221198 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). 
May 15 12:39:09.225257 systemd[1]: Starting systemd-userdbd.service - User Database Manager... May 15 12:39:09.226007 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 15 12:39:09.231504 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 15 12:39:09.232123 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 15 12:39:09.241359 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 15 12:39:09.243125 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 15 12:39:09.243222 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 15 12:39:09.243359 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 15 12:39:09.244279 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. May 15 12:39:09.245475 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 15 12:39:09.245680 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 15 12:39:09.252276 systemd[1]: Finished ensure-sysext.service. May 15 12:39:09.267707 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... May 15 12:39:09.273101 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. May 15 12:39:09.277422 systemd[1]: Starting systemd-update-done.service - Update is Completed... May 15 12:39:09.279425 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 15 12:39:09.279641 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 15 12:39:09.280510 systemd[1]: modprobe@loop.service: Deactivated successfully. May 15 12:39:09.281020 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 15 12:39:09.282328 systemd[1]: modprobe@drm.service: Deactivated successfully. May 15 12:39:09.282514 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 15 12:39:09.288157 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 15 12:39:09.288223 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 15 12:39:09.290263 systemd-udevd[1418]: Using default interface naming scheme 'v255'. May 15 12:39:09.301265 augenrules[1449]: No rules May 15 12:39:09.302897 systemd[1]: audit-rules.service: Deactivated successfully. May 15 12:39:09.303168 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 15 12:39:09.314916 systemd[1]: Finished systemd-update-done.service - Update is Completed. May 15 12:39:09.322853 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. 
May 15 12:39:09.324116 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 15 12:39:09.335762 systemd[1]: Started systemd-userdbd.service - User Database Manager. May 15 12:39:09.337290 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 15 12:39:09.348726 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 15 12:39:09.439084 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. May 15 12:39:09.504995 kernel: mousedev: PS/2 mouse device common for all mice May 15 12:39:09.548702 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. May 15 12:39:09.551085 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... May 15 12:39:09.553438 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 May 15 12:39:09.573873 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. May 15 12:39:09.578994 kernel: ACPI: button: Power Button [PWRF] May 15 12:39:09.600016 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt May 15 12:39:09.646260 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD May 15 12:39:09.673002 kernel: EDAC MC: Ver: 3.0.0 May 15 12:39:09.680142 systemd-networkd[1466]: lo: Link UP May 15 12:39:09.680478 systemd-networkd[1466]: lo: Gained carrier May 15 12:39:09.683222 systemd-networkd[1466]: Enumeration completed May 15 12:39:09.683688 systemd[1]: Started systemd-networkd.service - Network Configuration. May 15 12:39:09.684703 systemd-networkd[1466]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 15 12:39:09.685052 systemd-networkd[1466]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 15 12:39:09.687218 systemd-networkd[1466]: eth0: Link UP May 15 12:39:09.687376 systemd-networkd[1466]: eth0: Gained carrier May 15 12:39:09.687390 systemd-networkd[1466]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 15 12:39:09.687908 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... May 15 12:39:09.695099 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... May 15 12:39:09.751383 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
May 15 12:39:09.755236 systemd-resolved[1413]: Positive Trust Anchors: May 15 12:39:09.755502 systemd-resolved[1413]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 15 12:39:09.755533 systemd-resolved[1413]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 15 12:39:09.763038 systemd-resolved[1413]: Defaulting to hostname 'linux'. May 15 12:39:09.765330 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 15 12:39:09.766078 systemd[1]: Reached target network.target - Network. May 15 12:39:09.766627 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 15 12:39:09.794246 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. May 15 12:39:09.796096 systemd[1]: Reached target sysinit.target - System Initialization. May 15 12:39:09.796715 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. May 15 12:39:09.798096 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. May 15 12:39:09.798993 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. May 15 12:39:09.800040 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. May 15 12:39:09.801021 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 15 12:39:09.801046 systemd[1]: Reached target paths.target - Path Units. May 15 12:39:09.802039 systemd[1]: Reached target time-set.target - System Time Set. May 15 12:39:09.803221 systemd[1]: Started logrotate.timer - Daily rotation of log files. May 15 12:39:09.805161 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. May 15 12:39:09.805777 systemd[1]: Reached target timers.target - Timer Units. May 15 12:39:09.808376 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. May 15 12:39:09.813062 systemd[1]: Starting docker.socket - Docker Socket for the API... May 15 12:39:09.817955 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). May 15 12:39:09.819152 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). May 15 12:39:09.820032 systemd[1]: Reached target ssh-access.target - SSH Access Available. May 15 12:39:09.830724 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. May 15 12:39:09.831866 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. May 15 12:39:09.862242 systemd[1]: Listening on docker.socket - Docker Socket for the API. May 15 12:39:09.868538 systemd[1]: Reached target sockets.target - Socket Units. May 15 12:39:09.869166 systemd[1]: Reached target basic.target - Basic System. May 15 12:39:09.869752 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. May 15 12:39:09.869834 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
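systemd-resolved comes up with the standard DNSSEC trust anchor (the ". IN DS 20326 8 2 ..." record is the root zone's KSK-2017) plus negative anchors for private and special-use domains, and the system reaches network.target and basic.target. The resolver and link state reported above can be queried directly:

    networkctl status eth0         # link, carrier, and the DHCPv4 lease
    resolvectl status              # per-link DNS servers and DNSSEC setting
    resolvectl query flatcar.org   # exercise the resolver just brought up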
May 15 12:39:09.871613 systemd[1]: Starting containerd.service - containerd container runtime... May 15 12:39:09.877040 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... May 15 12:39:09.882342 systemd[1]: Starting dbus.service - D-Bus System Message Bus... May 15 12:39:09.885216 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... May 15 12:39:09.887874 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... May 15 12:39:09.895584 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... May 15 12:39:09.897032 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). May 15 12:39:09.898849 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... May 15 12:39:09.914438 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... May 15 12:39:09.918043 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... May 15 12:39:09.920531 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... May 15 12:39:09.923212 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... May 15 12:39:09.929073 jq[1531]: false May 15 12:39:09.933600 systemd[1]: Starting systemd-logind.service - User Login Management... May 15 12:39:09.936169 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 15 12:39:09.937227 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 15 12:39:09.940258 systemd[1]: Starting update-engine.service - Update Engine... May 15 12:39:09.942069 oslogin_cache_refresh[1533]: Refreshing passwd entry cache May 15 12:39:09.946501 google_oslogin_nss_cache[1533]: oslogin_cache_refresh[1533]: Refreshing passwd entry cache May 15 12:39:09.946501 google_oslogin_nss_cache[1533]: oslogin_cache_refresh[1533]: Failure getting users, quitting May 15 12:39:09.946501 google_oslogin_nss_cache[1533]: oslogin_cache_refresh[1533]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. May 15 12:39:09.946501 google_oslogin_nss_cache[1533]: oslogin_cache_refresh[1533]: Refreshing group entry cache May 15 12:39:09.943653 oslogin_cache_refresh[1533]: Failure getting users, quitting May 15 12:39:09.943665 oslogin_cache_refresh[1533]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. May 15 12:39:09.943700 oslogin_cache_refresh[1533]: Refreshing group entry cache May 15 12:39:09.950416 google_oslogin_nss_cache[1533]: oslogin_cache_refresh[1533]: Failure getting groups, quitting May 15 12:39:09.950416 google_oslogin_nss_cache[1533]: oslogin_cache_refresh[1533]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. May 15 12:39:09.948071 oslogin_cache_refresh[1533]: Failure getting groups, quitting May 15 12:39:09.947327 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... May 15 12:39:09.948082 oslogin_cache_refresh[1533]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. 
May 15 12:39:09.964167 extend-filesystems[1532]: Found loop4 May 15 12:39:09.971060 extend-filesystems[1532]: Found loop5 May 15 12:39:09.971060 extend-filesystems[1532]: Found loop6 May 15 12:39:09.971060 extend-filesystems[1532]: Found loop7 May 15 12:39:09.971060 extend-filesystems[1532]: Found sda May 15 12:39:09.971060 extend-filesystems[1532]: Found sda1 May 15 12:39:09.971060 extend-filesystems[1532]: Found sda2 May 15 12:39:09.971060 extend-filesystems[1532]: Found sda3 May 15 12:39:09.971060 extend-filesystems[1532]: Found usr May 15 12:39:09.971060 extend-filesystems[1532]: Found sda4 May 15 12:39:09.971060 extend-filesystems[1532]: Found sda6 May 15 12:39:09.971060 extend-filesystems[1532]: Found sda7 May 15 12:39:09.971060 extend-filesystems[1532]: Found sda9 May 15 12:39:09.971060 extend-filesystems[1532]: Checking size of /dev/sda9 May 15 12:39:10.027501 kernel: EXT4-fs (sda9): resizing filesystem from 553472 to 20360187 blocks May 15 12:39:09.965048 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. May 15 12:39:10.027578 coreos-metadata[1528]: May 15 12:39:10.008 INFO Putting http://169.254.169.254/v1/token: Attempt #1 May 15 12:39:10.031179 extend-filesystems[1532]: Resized partition /dev/sda9 May 15 12:39:09.966419 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 15 12:39:10.032105 extend-filesystems[1566]: resize2fs 1.47.2 (1-Jan-2025) May 15 12:39:10.034208 jq[1543]: true May 15 12:39:09.966637 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. May 15 12:39:09.967552 systemd[1]: google-oslogin-cache.service: Deactivated successfully. May 15 12:39:10.035697 update_engine[1542]: I20250515 12:39:10.034560 1542 main.cc:92] Flatcar Update Engine starting May 15 12:39:09.968293 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. May 15 12:39:09.970699 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 15 12:39:09.971582 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. May 15 12:39:09.973442 systemd[1]: motdgen.service: Deactivated successfully. May 15 12:39:09.973677 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. May 15 12:39:10.003682 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 15 12:39:10.041715 tar[1559]: linux-amd64/helm May 15 12:39:10.048176 (ntainerd)[1570]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 15 12:39:10.057744 jq[1568]: true May 15 12:39:10.085923 dbus-daemon[1529]: [system] SELinux support is enabled May 15 12:39:10.087760 systemd[1]: Started dbus.service - D-Bus System Message Bus. May 15 12:39:10.090544 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 15 12:39:10.090599 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. May 15 12:39:10.093140 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 15 12:39:10.093164 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
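extend-filesystems is performing an on-line grow of the root filesystem: at 4 KiB per block, 553472 blocks is roughly 2.1 GiB and 20360187 blocks is roughly 77.7 GiB, i.e. the small image root is being expanded to fill the provisioned disk while still mounted. Done by hand, the equivalent is roughly the following (device names taken from the log; growpart, from cloud-utils, is an assumption about available tooling):

    # Grow partition 9 of /dev/sda to the end of the disk
    growpart /dev/sda 9
    # ext4 supports growing on-line, even while mounted at /
    resize2fs /dev/sda9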
May 15 12:39:10.128174 systemd[1]: Started update-engine.service - Update Engine. May 15 12:39:10.131877 update_engine[1542]: I20250515 12:39:10.131716 1542 update_check_scheduler.cc:74] Next update check in 7m23s May 15 12:39:10.135211 systemd[1]: Started locksmithd.service - Cluster reboot manager. May 15 12:39:10.229675 systemd-logind[1541]: Watching system buttons on /dev/input/event2 (Power Button) May 15 12:39:10.229702 systemd-logind[1541]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) May 15 12:39:10.235930 systemd-logind[1541]: New seat seat0. May 15 12:39:10.239958 systemd[1]: Started systemd-logind.service - User Login Management. May 15 12:39:10.247190 systemd-networkd[1466]: eth0: DHCPv4 address 172.236.125.189/24, gateway 172.236.125.1 acquired from 23.215.118.129 May 15 12:39:10.247355 dbus-daemon[1529]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1466 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") May 15 12:39:10.251187 systemd-timesyncd[1440]: Network configuration changed, trying to establish connection. May 15 12:39:10.265022 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... May 15 12:39:10.287132 bash[1596]: Updated "/home/core/.ssh/authorized_keys" May 15 12:39:10.330171 kernel: EXT4-fs (sda9): resized filesystem to 20360187 May 15 12:39:10.346400 extend-filesystems[1566]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required May 15 12:39:10.346400 extend-filesystems[1566]: old_desc_blocks = 1, new_desc_blocks = 10 May 15 12:39:10.346400 extend-filesystems[1566]: The filesystem on /dev/sda9 is now 20360187 (4k) blocks long. May 15 12:39:10.477598 extend-filesystems[1532]: Resized filesystem in /dev/sda9 May 15 12:39:10.529187 containerd[1570]: time="2025-05-15T12:39:10Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 May 15 12:39:10.529187 containerd[1570]: time="2025-05-15T12:39:10.363524760Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 May 15 12:39:10.529187 containerd[1570]: time="2025-05-15T12:39:10.379792080Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="8.49µs" May 15 12:39:10.529187 containerd[1570]: time="2025-05-15T12:39:10.379817810Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 May 15 12:39:10.529187 containerd[1570]: time="2025-05-15T12:39:10.379835110Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 May 15 12:39:10.529187 containerd[1570]: time="2025-05-15T12:39:10.380341200Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 May 15 12:39:10.529187 containerd[1570]: time="2025-05-15T12:39:10.380370880Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 May 15 12:39:10.529187 containerd[1570]: time="2025-05-15T12:39:10.380394990Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 May 15 12:39:10.529187 containerd[1570]: time="2025-05-15T12:39:10.380453660Z" level=info msg="skip loading plugin" error="no scratch file 
generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 May 15 12:39:10.529187 containerd[1570]: time="2025-05-15T12:39:10.380464730Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 May 15 12:39:10.529187 containerd[1570]: time="2025-05-15T12:39:10.380718600Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 May 15 12:39:10.529187 containerd[1570]: time="2025-05-15T12:39:10.380731760Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 May 15 12:39:10.413421 systemd[1]: extend-filesystems.service: Deactivated successfully. May 15 12:39:10.483108 dbus-daemon[1529]: [system] Successfully activated service 'org.freedesktop.hostname1' May 15 12:39:10.529703 containerd[1570]: time="2025-05-15T12:39:10.380742090Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 May 15 12:39:10.529703 containerd[1570]: time="2025-05-15T12:39:10.380749020Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 May 15 12:39:10.529703 containerd[1570]: time="2025-05-15T12:39:10.380834590Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 May 15 12:39:10.529703 containerd[1570]: time="2025-05-15T12:39:10.381072590Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 May 15 12:39:10.529703 containerd[1570]: time="2025-05-15T12:39:10.381109400Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 May 15 12:39:10.529703 containerd[1570]: time="2025-05-15T12:39:10.381117910Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 May 15 12:39:10.529703 containerd[1570]: time="2025-05-15T12:39:10.381161130Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 May 15 12:39:10.529703 containerd[1570]: time="2025-05-15T12:39:10.383196520Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 May 15 12:39:10.529703 containerd[1570]: time="2025-05-15T12:39:10.383264890Z" level=info msg="metadata content store policy set" policy=shared May 15 12:39:10.529703 containerd[1570]: time="2025-05-15T12:39:10.385683960Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 May 15 12:39:10.529703 containerd[1570]: time="2025-05-15T12:39:10.385715410Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 May 15 12:39:10.529703 containerd[1570]: time="2025-05-15T12:39:10.385727590Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 May 15 12:39:10.529703 containerd[1570]: time="2025-05-15T12:39:10.385741740Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 May 15 12:39:10.413665 systemd[1]: Finished extend-filesystems.service - 
Extend Filesystems. May 15 12:39:10.483860 dbus-daemon[1529]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1603 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") May 15 12:39:10.531767 containerd[1570]: time="2025-05-15T12:39:10.385751630Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 May 15 12:39:10.531767 containerd[1570]: time="2025-05-15T12:39:10.385760140Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 May 15 12:39:10.531767 containerd[1570]: time="2025-05-15T12:39:10.385769890Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 May 15 12:39:10.531767 containerd[1570]: time="2025-05-15T12:39:10.385779010Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 May 15 12:39:10.531767 containerd[1570]: time="2025-05-15T12:39:10.385787520Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 May 15 12:39:10.531767 containerd[1570]: time="2025-05-15T12:39:10.385803740Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 May 15 12:39:10.531767 containerd[1570]: time="2025-05-15T12:39:10.385811430Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 May 15 12:39:10.531767 containerd[1570]: time="2025-05-15T12:39:10.385831170Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 May 15 12:39:10.531767 containerd[1570]: time="2025-05-15T12:39:10.385947220Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 May 15 12:39:10.531767 containerd[1570]: time="2025-05-15T12:39:10.385982870Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 May 15 12:39:10.531767 containerd[1570]: time="2025-05-15T12:39:10.385997960Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 May 15 12:39:10.531767 containerd[1570]: time="2025-05-15T12:39:10.386006950Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 May 15 12:39:10.531767 containerd[1570]: time="2025-05-15T12:39:10.386016250Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 May 15 12:39:10.531767 containerd[1570]: time="2025-05-15T12:39:10.386025130Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 May 15 12:39:10.457731 systemd-timesyncd[1440]: Contacted time server 99.28.14.242:123 (0.flatcar.pool.ntp.org). 
May 15 12:39:10.534070 containerd[1570]: time="2025-05-15T12:39:10.386034490Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 May 15 12:39:10.534070 containerd[1570]: time="2025-05-15T12:39:10.386047220Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 May 15 12:39:10.534070 containerd[1570]: time="2025-05-15T12:39:10.386057850Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 May 15 12:39:10.534070 containerd[1570]: time="2025-05-15T12:39:10.386066410Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 May 15 12:39:10.534070 containerd[1570]: time="2025-05-15T12:39:10.386075330Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 May 15 12:39:10.534070 containerd[1570]: time="2025-05-15T12:39:10.386125340Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" May 15 12:39:10.534070 containerd[1570]: time="2025-05-15T12:39:10.386136120Z" level=info msg="Start snapshots syncer" May 15 12:39:10.534070 containerd[1570]: time="2025-05-15T12:39:10.386154710Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 May 15 12:39:10.457777 systemd-timesyncd[1440]: Initial clock synchronization to Thu 2025-05-15 12:39:10.855264 UTC. May 15 12:39:10.534234 containerd[1570]: time="2025-05-15T12:39:10.386321270Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" May 15 12:39:10.534234 containerd[1570]: time="2025-05-15T12:39:10.386384830Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 May 15 12:39:10.534335 containerd[1570]: 
time="2025-05-15T12:39:10.389028840Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 May 15 12:39:10.534335 containerd[1570]: time="2025-05-15T12:39:10.389157220Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 May 15 12:39:10.534335 containerd[1570]: time="2025-05-15T12:39:10.389183710Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 May 15 12:39:10.534335 containerd[1570]: time="2025-05-15T12:39:10.389198550Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 May 15 12:39:10.534335 containerd[1570]: time="2025-05-15T12:39:10.389215120Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 May 15 12:39:10.534335 containerd[1570]: time="2025-05-15T12:39:10.389232200Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 May 15 12:39:10.534335 containerd[1570]: time="2025-05-15T12:39:10.389242700Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 May 15 12:39:10.534335 containerd[1570]: time="2025-05-15T12:39:10.389251400Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 May 15 12:39:10.534335 containerd[1570]: time="2025-05-15T12:39:10.389273900Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 May 15 12:39:10.534335 containerd[1570]: time="2025-05-15T12:39:10.389287590Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 May 15 12:39:10.534335 containerd[1570]: time="2025-05-15T12:39:10.389305770Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 May 15 12:39:10.534335 containerd[1570]: time="2025-05-15T12:39:10.389359220Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 15 12:39:10.534335 containerd[1570]: time="2025-05-15T12:39:10.389376340Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 15 12:39:10.534335 containerd[1570]: time="2025-05-15T12:39:10.389390060Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 15 12:39:10.534593 containerd[1570]: time="2025-05-15T12:39:10.389465020Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 15 12:39:10.534593 containerd[1570]: time="2025-05-15T12:39:10.389476280Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 May 15 12:39:10.534593 containerd[1570]: time="2025-05-15T12:39:10.389492790Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 May 15 12:39:10.534593 containerd[1570]: time="2025-05-15T12:39:10.389514490Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 May 15 12:39:10.534593 containerd[1570]: time="2025-05-15T12:39:10.389538600Z" level=info msg="runtime interface created" May 15 12:39:10.534593 containerd[1570]: time="2025-05-15T12:39:10.389545710Z" level=info msg="created NRI interface" May 15 
12:39:10.534593 containerd[1570]: time="2025-05-15T12:39:10.389553020Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 May 15 12:39:10.534593 containerd[1570]: time="2025-05-15T12:39:10.389563190Z" level=info msg="Connect containerd service" May 15 12:39:10.534593 containerd[1570]: time="2025-05-15T12:39:10.389585980Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 15 12:39:10.534593 containerd[1570]: time="2025-05-15T12:39:10.390624850Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 15 12:39:10.550608 containerd[1570]: time="2025-05-15T12:39:10.550175130Z" level=info msg="Start subscribing containerd event" May 15 12:39:10.550608 containerd[1570]: time="2025-05-15T12:39:10.550219900Z" level=info msg="Start recovering state" May 15 12:39:10.553161 containerd[1570]: time="2025-05-15T12:39:10.553063150Z" level=info msg="Start event monitor" May 15 12:39:10.553161 containerd[1570]: time="2025-05-15T12:39:10.553104210Z" level=info msg="Start cni network conf syncer for default" May 15 12:39:10.553161 containerd[1570]: time="2025-05-15T12:39:10.553111330Z" level=info msg="Start streaming server" May 15 12:39:10.553161 containerd[1570]: time="2025-05-15T12:39:10.553119330Z" level=info msg="Registered namespace \"k8s.io\" with NRI" May 15 12:39:10.553161 containerd[1570]: time="2025-05-15T12:39:10.553125930Z" level=info msg="runtime interface starting up..." May 15 12:39:10.553161 containerd[1570]: time="2025-05-15T12:39:10.553131130Z" level=info msg="starting plugins..." May 15 12:39:10.555467 containerd[1570]: time="2025-05-15T12:39:10.553144550Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" May 15 12:39:10.556158 containerd[1570]: time="2025-05-15T12:39:10.556055070Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 15 12:39:10.559978 containerd[1570]: time="2025-05-15T12:39:10.556139410Z" level=info msg=serving... address=/run/containerd/containerd.sock May 15 12:39:10.560190 containerd[1570]: time="2025-05-15T12:39:10.560150170Z" level=info msg="containerd successfully booted in 0.199850s" May 15 12:39:10.670743 systemd[1]: Started systemd-hostnamed.service - Hostname Service. May 15 12:39:10.671570 systemd[1]: Started containerd.service - containerd container runtime. May 15 12:39:10.673291 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 15 12:39:10.674807 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 15 12:39:10.722526 systemd[1]: Starting polkit.service - Authorization Manager... May 15 12:39:10.726059 systemd[1]: Starting sshkeys.service... May 15 12:39:10.753168 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. May 15 12:39:10.759873 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... May 15 12:39:10.762197 locksmithd[1585]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 15 12:39:10.774122 systemd-networkd[1466]: eth0: Gained IPv6LL May 15 12:39:10.780096 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. 
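The single error in containerd's startup, "no network config found in /etc/cni/net.d", is expected on a fresh node: the CRI plugin loads CNI configuration lazily, and the file normally appears only after a network addon is installed. For reference, the kind of file it is looking for is a conflist dropped into /etc/cni/net.d; a sketch follows, with the name and subnet purely illustrative, not from this host:

    {
      "cniVersion": "1.0.0",
      "name": "mynet",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "cni0",
          "isGateway": true,
          "ipMasq": true,
          "ipam": {
            "type": "host-local",
            "subnet": "10.88.0.0/16"
          }
        }
      ]
    }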
May 15 12:39:10.781571 systemd[1]: Reached target network-online.target - Network is Online. May 15 12:39:10.786172 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 15 12:39:10.794159 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 15 12:39:10.855776 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 15 12:39:10.891400 coreos-metadata[1630]: May 15 12:39:10.890 INFO Putting http://169.254.169.254/v1/token: Attempt #1 May 15 12:39:10.898742 tar[1559]: linux-amd64/LICENSE May 15 12:39:10.898742 tar[1559]: linux-amd64/README.md May 15 12:39:10.915867 polkitd[1628]: Started polkitd version 126 May 15 12:39:10.924480 polkitd[1628]: Loading rules from directory /etc/polkit-1/rules.d May 15 12:39:10.924753 polkitd[1628]: Loading rules from directory /run/polkit-1/rules.d May 15 12:39:10.924798 polkitd[1628]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) May 15 12:39:10.925879 polkitd[1628]: Loading rules from directory /usr/local/share/polkit-1/rules.d May 15 12:39:10.925910 polkitd[1628]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) May 15 12:39:10.925946 polkitd[1628]: Loading rules from directory /usr/share/polkit-1/rules.d May 15 12:39:10.926550 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 15 12:39:10.927031 sshd_keygen[1573]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 15 12:39:10.928739 polkitd[1628]: Finished loading, compiling and executing 2 rules May 15 12:39:10.928930 systemd[1]: Started polkit.service - Authorization Manager. May 15 12:39:10.932153 dbus-daemon[1529]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' May 15 12:39:10.932531 polkitd[1628]: Acquired the name org.freedesktop.PolicyKit1 on the system bus May 15 12:39:10.948416 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 15 12:39:10.950924 systemd[1]: Starting issuegen.service - Generate /run/issue... May 15 12:39:10.953235 systemd-hostnamed[1603]: Hostname set to <172-236-125-189> (transient) May 15 12:39:10.953775 systemd-resolved[1413]: System hostname changed to '172-236-125-189'. May 15 12:39:10.965436 systemd[1]: issuegen.service: Deactivated successfully. May 15 12:39:10.965792 systemd[1]: Finished issuegen.service - Generate /run/issue. May 15 12:39:10.970690 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 15 12:39:10.987403 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 15 12:39:10.989644 coreos-metadata[1630]: May 15 12:39:10.989 INFO Fetching http://169.254.169.254/v1/ssh-keys: Attempt #1 May 15 12:39:10.992140 systemd[1]: Started getty@tty1.service - Getty on tty1. May 15 12:39:10.996256 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. May 15 12:39:10.997059 systemd[1]: Reached target getty.target - Login Prompts. 
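polkitd's rules are small JavaScript files evaluated in lexical order across its rule directories; the two "Error opening rules directory" lines are harmless, since only two of the four standard directories exist on this image and two rules were still compiled. The file format, as a purely illustrative sketch (the rule content is hypothetical):

    // /etc/polkit-1/rules.d/49-example.rules (hypothetical)
    polkit.addRule(function(action, subject) {
        // Allow members of group "wheel" to manage systemd units
        if (action.id == "org.freedesktop.systemd1.manage-units" &&
            subject.isInGroup("wheel")) {
            return polkit.Result.YES;
        }
    });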
May 15 12:39:11.026309 coreos-metadata[1528]: May 15 12:39:11.026 INFO Putting http://169.254.169.254/v1/token: Attempt #2 May 15 12:39:11.137469 coreos-metadata[1630]: May 15 12:39:11.137 INFO Fetch successful May 15 12:39:11.159185 coreos-metadata[1528]: May 15 12:39:11.159 INFO Fetching http://169.254.169.254/v1/instance: Attempt #1 May 15 12:39:11.162394 update-ssh-keys[1677]: Updated "/home/core/.ssh/authorized_keys" May 15 12:39:11.166108 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). May 15 12:39:11.171144 systemd[1]: Finished sshkeys.service. May 15 12:39:11.352224 coreos-metadata[1528]: May 15 12:39:11.352 INFO Fetch successful May 15 12:39:11.352369 coreos-metadata[1528]: May 15 12:39:11.352 INFO Fetching http://169.254.169.254/v1/network: Attempt #1 May 15 12:39:11.622992 coreos-metadata[1528]: May 15 12:39:11.622 INFO Fetch successful May 15 12:39:11.694135 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 15 12:39:11.698678 (kubelet)[1694]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 15 12:39:11.733155 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. May 15 12:39:11.734978 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. May 15 12:39:11.736276 systemd[1]: Reached target multi-user.target - Multi-User System. May 15 12:39:11.737256 systemd[1]: Startup finished in 2.840s (kernel) + 7.621s (initrd) + 4.530s (userspace) = 14.992s. May 15 12:39:12.266551 kubelet[1694]: E0515 12:39:12.266463 1694 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 15 12:39:12.270394 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 15 12:39:12.270578 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 15 12:39:12.270938 systemd[1]: kubelet.service: Consumed 803ms CPU time, 241.8M memory peak. May 15 12:39:14.378837 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 15 12:39:14.380228 systemd[1]: Started sshd@0-172.236.125.189:22-139.178.89.65:52204.service - OpenSSH per-connection server daemon (139.178.89.65:52204). May 15 12:39:14.747379 sshd[1718]: Accepted publickey for core from 139.178.89.65 port 52204 ssh2: RSA SHA256:gyeIyP7CTSF398gDeXUDBL3yfhdqSHwOrE2zyc7w3tk May 15 12:39:14.749856 sshd-session[1718]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 12:39:14.757157 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 15 12:39:14.758693 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 15 12:39:14.767899 systemd-logind[1541]: New session 1 of user core. May 15 12:39:14.779244 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 15 12:39:14.782385 systemd[1]: Starting user@500.service - User Manager for UID 500... May 15 12:39:14.796725 (systemd)[1722]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 15 12:39:14.799715 systemd-logind[1541]: New session c1 of user core. 
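The kubelet crash above is the expected first-boot behavior of a kubeadm-style setup: the unit starts before anything has written /var/lib/kubelet/config.yaml, so it exits and systemd keeps retrying (the "Scheduled restart job" entries later in this log). The missing file is a KubeletConfiguration, normally generated by kubeadm init/join; a heavily trimmed sketch of its shape, with illustrative values:

    # /var/lib/kubelet/config.yaml (trimmed sketch; kubeadm writes the real file)
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd          # matches SystemdCgroup:true in containerd above
    staticPodPath: /etc/kubernetes/manifests
    clusterDomain: cluster.local
    clusterDNS:
      - 10.96.0.10                 # illustrative cluster DNS service IP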
May 15 12:39:14.932385 systemd[1722]: Queued start job for default target default.target. May 15 12:39:14.940363 systemd[1722]: Created slice app.slice - User Application Slice. May 15 12:39:14.940396 systemd[1722]: Reached target paths.target - Paths. May 15 12:39:14.940443 systemd[1722]: Reached target timers.target - Timers. May 15 12:39:14.942166 systemd[1722]: Starting dbus.socket - D-Bus User Message Bus Socket... May 15 12:39:14.956123 systemd[1722]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 15 12:39:14.956267 systemd[1722]: Reached target sockets.target - Sockets. May 15 12:39:14.956313 systemd[1722]: Reached target basic.target - Basic System. May 15 12:39:14.956357 systemd[1722]: Reached target default.target - Main User Target. May 15 12:39:14.956392 systemd[1722]: Startup finished in 148ms. May 15 12:39:14.956581 systemd[1]: Started user@500.service - User Manager for UID 500. May 15 12:39:14.964174 systemd[1]: Started session-1.scope - Session 1 of User core. May 15 12:39:15.230948 systemd[1]: Started sshd@1-172.236.125.189:22-139.178.89.65:52220.service - OpenSSH per-connection server daemon (139.178.89.65:52220). May 15 12:39:15.576594 sshd[1733]: Accepted publickey for core from 139.178.89.65 port 52220 ssh2: RSA SHA256:gyeIyP7CTSF398gDeXUDBL3yfhdqSHwOrE2zyc7w3tk May 15 12:39:15.578312 sshd-session[1733]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 12:39:15.584880 systemd-logind[1541]: New session 2 of user core. May 15 12:39:15.592232 systemd[1]: Started session-2.scope - Session 2 of User core. May 15 12:39:15.825902 sshd[1735]: Connection closed by 139.178.89.65 port 52220 May 15 12:39:15.826507 sshd-session[1733]: pam_unix(sshd:session): session closed for user core May 15 12:39:15.831220 systemd-logind[1541]: Session 2 logged out. Waiting for processes to exit. May 15 12:39:15.831923 systemd[1]: sshd@1-172.236.125.189:22-139.178.89.65:52220.service: Deactivated successfully. May 15 12:39:15.833819 systemd[1]: session-2.scope: Deactivated successfully. May 15 12:39:15.835700 systemd-logind[1541]: Removed session 2. May 15 12:39:15.899578 systemd[1]: Started sshd@2-172.236.125.189:22-139.178.89.65:52222.service - OpenSSH per-connection server daemon (139.178.89.65:52222). May 15 12:39:16.254285 sshd[1741]: Accepted publickey for core from 139.178.89.65 port 52222 ssh2: RSA SHA256:gyeIyP7CTSF398gDeXUDBL3yfhdqSHwOrE2zyc7w3tk May 15 12:39:16.255667 sshd-session[1741]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 12:39:16.260620 systemd-logind[1541]: New session 3 of user core. May 15 12:39:16.268129 systemd[1]: Started session-3.scope - Session 3 of User core. May 15 12:39:16.509933 sshd[1743]: Connection closed by 139.178.89.65 port 52222 May 15 12:39:16.511029 sshd-session[1741]: pam_unix(sshd:session): session closed for user core May 15 12:39:16.514950 systemd-logind[1541]: Session 3 logged out. Waiting for processes to exit. May 15 12:39:16.515558 systemd[1]: sshd@2-172.236.125.189:22-139.178.89.65:52222.service: Deactivated successfully. May 15 12:39:16.517303 systemd[1]: session-3.scope: Deactivated successfully. May 15 12:39:16.519152 systemd-logind[1541]: Removed session 3. May 15 12:39:16.577918 systemd[1]: Started sshd@3-172.236.125.189:22-139.178.89.65:51740.service - OpenSSH per-connection server daemon (139.178.89.65:51740). 
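Each "sshd@N-<local>:22-<peer>:<port>.service" above is a per-connection instance: sshd.socket is declared with Accept=yes, so systemd accepts the TCP connection itself, forks one service per client, and names the instance after the connection tuple. The shape of that pair, sketched from the standard pattern rather than the verbatim Flatcar units:

    # sshd.socket
    [Socket]
    ListenStream=22
    Accept=yes

    # sshd@.service (template; one instance per connection)
    [Service]
    # -i runs sshd in inetd mode on the socket systemd passes in
    ExecStart=-/usr/sbin/sshd -i
    StandardInput=socket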
May 15 12:39:16.927749 sshd[1749]: Accepted publickey for core from 139.178.89.65 port 51740 ssh2: RSA SHA256:gyeIyP7CTSF398gDeXUDBL3yfhdqSHwOrE2zyc7w3tk May 15 12:39:16.929233 sshd-session[1749]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 12:39:16.933543 systemd-logind[1541]: New session 4 of user core. May 15 12:39:16.939142 systemd[1]: Started session-4.scope - Session 4 of User core. May 15 12:39:17.186348 sshd[1751]: Connection closed by 139.178.89.65 port 51740 May 15 12:39:17.187071 sshd-session[1749]: pam_unix(sshd:session): session closed for user core May 15 12:39:17.191607 systemd-logind[1541]: Session 4 logged out. Waiting for processes to exit. May 15 12:39:17.192313 systemd[1]: sshd@3-172.236.125.189:22-139.178.89.65:51740.service: Deactivated successfully. May 15 12:39:17.194364 systemd[1]: session-4.scope: Deactivated successfully. May 15 12:39:17.196293 systemd-logind[1541]: Removed session 4. May 15 12:39:17.244149 systemd[1]: Started sshd@4-172.236.125.189:22-139.178.89.65:51756.service - OpenSSH per-connection server daemon (139.178.89.65:51756). May 15 12:39:17.578953 sshd[1757]: Accepted publickey for core from 139.178.89.65 port 51756 ssh2: RSA SHA256:gyeIyP7CTSF398gDeXUDBL3yfhdqSHwOrE2zyc7w3tk May 15 12:39:17.580569 sshd-session[1757]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 12:39:17.585042 systemd-logind[1541]: New session 5 of user core. May 15 12:39:17.596120 systemd[1]: Started session-5.scope - Session 5 of User core. May 15 12:39:17.784148 sudo[1760]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 15 12:39:17.784448 sudo[1760]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 15 12:39:17.799455 sudo[1760]: pam_unix(sudo:session): session closed for user root May 15 12:39:17.850283 sshd[1759]: Connection closed by 139.178.89.65 port 51756 May 15 12:39:17.851149 sshd-session[1757]: pam_unix(sshd:session): session closed for user core May 15 12:39:17.854942 systemd-logind[1541]: Session 5 logged out. Waiting for processes to exit. May 15 12:39:17.855695 systemd[1]: sshd@4-172.236.125.189:22-139.178.89.65:51756.service: Deactivated successfully. May 15 12:39:17.857264 systemd[1]: session-5.scope: Deactivated successfully. May 15 12:39:17.859267 systemd-logind[1541]: Removed session 5. May 15 12:39:17.919381 systemd[1]: Started sshd@5-172.236.125.189:22-139.178.89.65:51772.service - OpenSSH per-connection server daemon (139.178.89.65:51772). May 15 12:39:18.273062 sshd[1766]: Accepted publickey for core from 139.178.89.65 port 51772 ssh2: RSA SHA256:gyeIyP7CTSF398gDeXUDBL3yfhdqSHwOrE2zyc7w3tk May 15 12:39:18.274500 sshd-session[1766]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 12:39:18.279613 systemd-logind[1541]: New session 6 of user core. May 15 12:39:18.285103 systemd[1]: Started session-6.scope - Session 6 of User core. 
May 15 12:39:18.478841 sudo[1770]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 15 12:39:18.479158 sudo[1770]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 15 12:39:18.483238 sudo[1770]: pam_unix(sudo:session): session closed for user root May 15 12:39:18.488364 sudo[1769]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules May 15 12:39:18.488644 sudo[1769]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 15 12:39:18.497558 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 15 12:39:18.539346 augenrules[1792]: No rules May 15 12:39:18.540800 systemd[1]: audit-rules.service: Deactivated successfully. May 15 12:39:18.541096 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 15 12:39:18.542274 sudo[1769]: pam_unix(sudo:session): session closed for user root May 15 12:39:18.595857 sshd[1768]: Connection closed by 139.178.89.65 port 51772 May 15 12:39:18.596374 sshd-session[1766]: pam_unix(sshd:session): session closed for user core May 15 12:39:18.600216 systemd[1]: sshd@5-172.236.125.189:22-139.178.89.65:51772.service: Deactivated successfully. May 15 12:39:18.601683 systemd[1]: session-6.scope: Deactivated successfully. May 15 12:39:18.602541 systemd-logind[1541]: Session 6 logged out. Waiting for processes to exit. May 15 12:39:18.603776 systemd-logind[1541]: Removed session 6. May 15 12:39:18.670756 systemd[1]: Started sshd@6-172.236.125.189:22-139.178.89.65:51786.service - OpenSSH per-connection server daemon (139.178.89.65:51786). May 15 12:39:19.033624 sshd[1801]: Accepted publickey for core from 139.178.89.65 port 51786 ssh2: RSA SHA256:gyeIyP7CTSF398gDeXUDBL3yfhdqSHwOrE2zyc7w3tk May 15 12:39:19.035338 sshd-session[1801]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 12:39:19.042174 systemd[1]: Started sshd@7-172.236.125.189:22-80.94.95.115:38516.service - OpenSSH per-connection server daemon (80.94.95.115:38516). May 15 12:39:19.045690 systemd-logind[1541]: New session 7 of user core. May 15 12:39:19.052298 systemd[1]: Started session-7.scope - Session 7 of User core. May 15 12:39:19.237140 sudo[1806]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 15 12:39:19.237439 sudo[1806]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 15 12:39:19.507078 systemd[1]: Starting docker.service - Docker Application Container Engine... May 15 12:39:19.517295 (dockerd)[1824]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 15 12:39:19.700131 dockerd[1824]: time="2025-05-15T12:39:19.700071154Z" level=info msg="Starting up" May 15 12:39:19.701627 dockerd[1824]: time="2025-05-15T12:39:19.701601259Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" May 15 12:39:19.750694 dockerd[1824]: time="2025-05-15T12:39:19.750474101Z" level=info msg="Loading containers: start." May 15 12:39:19.760015 kernel: Initializing XFRM netlink socket May 15 12:39:19.986146 systemd-networkd[1466]: docker0: Link UP May 15 12:39:19.988296 dockerd[1824]: time="2025-05-15T12:39:19.988263315Z" level=info msg="Loading containers: done." 
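The sudo commands above deliberately delete the shipped audit rule files and restart audit-rules.service, so augenrules reports "No rules" and the service finishes with an empty ruleset. Restoring rules is a matter of dropping files back into the directory augenrules concatenates; one illustrative line, not taken from this host:

    # /etc/audit/rules.d/70-example.rules (hypothetical)
    # Record writes and attribute changes to sshd's config, keyed for ausearch
    -w /etc/ssh/sshd_config -p wa -k sshd_config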
May 15 12:39:20.001266 dockerd[1824]: time="2025-05-15T12:39:20.001228546Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 15 12:39:20.001387 dockerd[1824]: time="2025-05-15T12:39:20.001294000Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 May 15 12:39:20.001508 dockerd[1824]: time="2025-05-15T12:39:20.001483645Z" level=info msg="Initializing buildkit" May 15 12:39:20.019367 dockerd[1824]: time="2025-05-15T12:39:20.019347823Z" level=info msg="Completed buildkit initialization" May 15 12:39:20.025775 dockerd[1824]: time="2025-05-15T12:39:20.025732999Z" level=info msg="Daemon has completed initialization" May 15 12:39:20.025991 dockerd[1824]: time="2025-05-15T12:39:20.025941563Z" level=info msg="API listen on /run/docker.sock" May 15 12:39:20.026280 systemd[1]: Started docker.service - Docker Application Container Engine. May 15 12:39:20.616063 containerd[1570]: time="2025-05-15T12:39:20.616014529Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\"" May 15 12:39:20.672809 sshd[1804]: Invalid user ubnt from 80.94.95.115 port 38516 May 15 12:39:20.723822 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck607918418-merged.mount: Deactivated successfully. May 15 12:39:20.856915 sshd[1804]: Connection closed by invalid user ubnt 80.94.95.115 port 38516 [preauth] May 15 12:39:20.859775 systemd[1]: sshd@7-172.236.125.189:22-80.94.95.115:38516.service: Deactivated successfully. May 15 12:39:21.581835 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2180348586.mount: Deactivated successfully. May 15 12:39:22.383919 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 15 12:39:22.386351 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 15 12:39:22.541524 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 15 12:39:22.553554 (kubelet)[2100]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 15 12:39:22.594775 kubelet[2100]: E0515 12:39:22.594735 2100 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 15 12:39:22.600524 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 15 12:39:22.600816 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 15 12:39:22.601424 systemd[1]: kubelet.service: Consumed 163ms CPU time, 95.3M memory peak. 
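dockerd chose the overlay2 storage driver and warns that native diff is disabled because this kernel enables CONFIG_OVERLAY_FS_REDIRECT_DIR; that slows image builds but does not affect running containers. Daemon-level choices like the storage and log drivers can be pinned in /etc/docker/daemon.json; a minimal illustrative file, not read from this host:

    {
      "storage-driver": "overlay2",
      "log-driver": "journald"
    }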
May 15 12:39:22.966650 containerd[1570]: time="2025-05-15T12:39:22.966599041Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:39:22.967434 containerd[1570]: time="2025-05-15T12:39:22.967416749Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.12: active requests=0, bytes read=32674873" May 15 12:39:22.968104 containerd[1570]: time="2025-05-15T12:39:22.968078208Z" level=info msg="ImageCreate event name:\"sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:39:22.970386 containerd[1570]: time="2025-05-15T12:39:22.970363704Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:39:22.971379 containerd[1570]: time="2025-05-15T12:39:22.971188675Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.12\" with image id \"sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\", size \"32671673\" in 2.355137548s" May 15 12:39:22.971379 containerd[1570]: time="2025-05-15T12:39:22.971215994Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\" returns image reference \"sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb\"" May 15 12:39:22.987087 containerd[1570]: time="2025-05-15T12:39:22.987017102Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\"" May 15 12:39:24.913253 containerd[1570]: time="2025-05-15T12:39:24.913176947Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:39:24.914023 containerd[1570]: time="2025-05-15T12:39:24.913962887Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.12: active requests=0, bytes read=29617534" May 15 12:39:24.914850 containerd[1570]: time="2025-05-15T12:39:24.914787847Z" level=info msg="ImageCreate event name:\"sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:39:24.920227 containerd[1570]: time="2025-05-15T12:39:24.920166566Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:39:24.921147 containerd[1570]: time="2025-05-15T12:39:24.921045828Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.12\" with image id \"sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\", size \"31105907\" in 1.933860025s" May 15 12:39:24.921147 containerd[1570]: time="2025-05-15T12:39:24.921098869Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\" returns image reference \"sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8\"" May 15 
12:39:25.007130 containerd[1570]: time="2025-05-15T12:39:25.007051403Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\"" May 15 12:39:26.864748 containerd[1570]: time="2025-05-15T12:39:26.863830671Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:39:26.864748 containerd[1570]: time="2025-05-15T12:39:26.864685149Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.12: active requests=0, bytes read=17903682" May 15 12:39:26.865289 containerd[1570]: time="2025-05-15T12:39:26.865247981Z" level=info msg="ImageCreate event name:\"sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:39:26.867062 containerd[1570]: time="2025-05-15T12:39:26.867041349Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:39:26.867895 containerd[1570]: time="2025-05-15T12:39:26.867872955Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.12\" with image id \"sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\", size \"19392073\" in 1.860778571s" May 15 12:39:26.867958 containerd[1570]: time="2025-05-15T12:39:26.867945599Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\" returns image reference \"sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d\"" May 15 12:39:26.917445 containerd[1570]: time="2025-05-15T12:39:26.917397288Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\"" May 15 12:39:28.593953 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1746582403.mount: Deactivated successfully. 
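These PullImage/ImageCreate sequences are CRI calls served by containerd; the log does not show which client requested them (pre-pulling by the install script's tooling is a reasonable guess). The same pull can be reproduced by hand with crictl, assuming it is installed on the node:

    # Point crictl at containerd's CRI socket and pull one of the images above
    crictl --runtime-endpoint unix:///run/containerd/containerd.sock \
        pull registry.k8s.io/kube-proxy:v1.30.12
    # List images to confirm the sha256 image ID matches the log
    crictl --runtime-endpoint unix:///run/containerd/containerd.sock images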
May 15 12:39:29.588570 containerd[1570]: time="2025-05-15T12:39:29.588531259Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:39:29.589556 containerd[1570]: time="2025-05-15T12:39:29.589497726Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.12: active requests=0, bytes read=29185817" May 15 12:39:29.589906 containerd[1570]: time="2025-05-15T12:39:29.589856773Z" level=info msg="ImageCreate event name:\"sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:39:29.591313 containerd[1570]: time="2025-05-15T12:39:29.591266766Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:39:29.592048 containerd[1570]: time="2025-05-15T12:39:29.591926449Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.12\" with image id \"sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b\", repo tag \"registry.k8s.io/kube-proxy:v1.30.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\", size \"29184836\" in 2.674493126s" May 15 12:39:29.592048 containerd[1570]: time="2025-05-15T12:39:29.591952044Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\" returns image reference \"sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b\"" May 15 12:39:29.633052 containerd[1570]: time="2025-05-15T12:39:29.633026205Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" May 15 12:39:30.261776 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2978715311.mount: Deactivated successfully. 
May 15 12:39:31.626871 containerd[1570]: time="2025-05-15T12:39:31.626819779Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:39:31.627791 containerd[1570]: time="2025-05-15T12:39:31.627676067Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" May 15 12:39:31.628241 containerd[1570]: time="2025-05-15T12:39:31.628216442Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:39:31.630157 containerd[1570]: time="2025-05-15T12:39:31.630133861Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:39:31.631004 containerd[1570]: time="2025-05-15T12:39:31.630888500Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.997710815s" May 15 12:39:31.631004 containerd[1570]: time="2025-05-15T12:39:31.630913375Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" May 15 12:39:31.695679 containerd[1570]: time="2025-05-15T12:39:31.695647197Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" May 15 12:39:32.271494 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount642773225.mount: Deactivated successfully. 
May 15 12:39:32.275346 containerd[1570]: time="2025-05-15T12:39:32.275285165Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:39:32.275990 containerd[1570]: time="2025-05-15T12:39:32.275950032Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" May 15 12:39:32.276515 containerd[1570]: time="2025-05-15T12:39:32.276470345Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:39:32.277985 containerd[1570]: time="2025-05-15T12:39:32.277842015Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:39:32.278589 containerd[1570]: time="2025-05-15T12:39:32.278569470Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 582.89139ms" May 15 12:39:32.278658 containerd[1570]: time="2025-05-15T12:39:32.278644535Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" May 15 12:39:32.389302 containerd[1570]: time="2025-05-15T12:39:32.389208020Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" May 15 12:39:32.634047 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. May 15 12:39:32.635775 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 15 12:39:32.807948 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 15 12:39:32.818267 (kubelet)[2224]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 15 12:39:32.907925 kubelet[2224]: E0515 12:39:32.907815 2224 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 15 12:39:32.911651 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 15 12:39:32.911836 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 15 12:39:32.912258 systemd[1]: kubelet.service: Consumed 213ms CPU time, 96.4M memory peak. May 15 12:39:33.062895 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2617344612.mount: Deactivated successfully. 
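The kubelet attempts land exactly ten seconds apart (12:39:12, :22, :32) because each failed exit is rescheduled by the unit's restart policy; the stock kubeadm-style kubelet unit uses Restart=always with RestartSec=10, which matches the counter and timing seen here. The relevant directives, as a sketch:

    # kubelet.service, [Service] section (sketch of the restart policy)
    [Service]
    Restart=always
    RestartSec=10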
May 15 12:39:35.288528 containerd[1570]: time="2025-05-15T12:39:35.288422843Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:39:35.289457 containerd[1570]: time="2025-05-15T12:39:35.289424951Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238571" May 15 12:39:35.290439 containerd[1570]: time="2025-05-15T12:39:35.289917183Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:39:35.292522 containerd[1570]: time="2025-05-15T12:39:35.292487239Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:39:35.293616 containerd[1570]: time="2025-05-15T12:39:35.293578788Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 2.90412219s" May 15 12:39:35.293697 containerd[1570]: time="2025-05-15T12:39:35.293681294Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" May 15 12:39:37.522281 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 15 12:39:37.523058 systemd[1]: kubelet.service: Consumed 213ms CPU time, 96.4M memory peak. May 15 12:39:37.527185 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 15 12:39:37.549938 systemd[1]: Reload requested from client PID 2363 ('systemctl') (unit session-7.scope)... May 15 12:39:37.550131 systemd[1]: Reloading... May 15 12:39:37.707038 zram_generator::config[2407]: No configuration found. May 15 12:39:37.802029 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 15 12:39:37.912681 systemd[1]: Reloading finished in 362 ms. May 15 12:39:37.969268 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM May 15 12:39:37.969383 systemd[1]: kubelet.service: Failed with result 'signal'. May 15 12:39:37.969739 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 15 12:39:37.969792 systemd[1]: kubelet.service: Consumed 132ms CPU time, 83.6M memory peak. May 15 12:39:37.972612 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 15 12:39:38.129964 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 15 12:39:38.140372 (kubelet)[2462]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 15 12:39:38.196725 kubelet[2462]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 15 12:39:38.196725 kubelet[2462]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. 
Image garbage collector will get sandbox image information from CRI. May 15 12:39:38.196725 kubelet[2462]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 15 12:39:38.198987 kubelet[2462]: I0515 12:39:38.198656 2462 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 15 12:39:38.614001 kubelet[2462]: I0515 12:39:38.613928 2462 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" May 15 12:39:38.614001 kubelet[2462]: I0515 12:39:38.613961 2462 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 15 12:39:38.614376 kubelet[2462]: I0515 12:39:38.614202 2462 server.go:927] "Client rotation is on, will bootstrap in background" May 15 12:39:38.627797 kubelet[2462]: I0515 12:39:38.627766 2462 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 15 12:39:38.640003 kubelet[2462]: E0515 12:39:38.639735 2462 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.236.125.189:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.236.125.189:6443: connect: connection refused May 15 12:39:38.655740 kubelet[2462]: I0515 12:39:38.655711 2462 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 15 12:39:38.660073 kubelet[2462]: I0515 12:39:38.660012 2462 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 15 12:39:38.660262 kubelet[2462]: I0515 12:39:38.660060 2462 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172-236-125-189","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} May 15 12:39:38.661018 
kubelet[2462]: I0515 12:39:38.660963 2462 topology_manager.go:138] "Creating topology manager with none policy" May 15 12:39:38.661018 kubelet[2462]: I0515 12:39:38.661005 2462 container_manager_linux.go:301] "Creating device plugin manager" May 15 12:39:38.661189 kubelet[2462]: I0515 12:39:38.661157 2462 state_mem.go:36] "Initialized new in-memory state store" May 15 12:39:38.662431 kubelet[2462]: I0515 12:39:38.662061 2462 kubelet.go:400] "Attempting to sync node with API server" May 15 12:39:38.662431 kubelet[2462]: I0515 12:39:38.662189 2462 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" May 15 12:39:38.662431 kubelet[2462]: I0515 12:39:38.662213 2462 kubelet.go:312] "Adding apiserver pod source" May 15 12:39:38.662431 kubelet[2462]: I0515 12:39:38.662234 2462 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 15 12:39:38.663656 kubelet[2462]: W0515 12:39:38.662741 2462 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.236.125.189:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-236-125-189&limit=500&resourceVersion=0": dial tcp 172.236.125.189:6443: connect: connection refused May 15 12:39:38.663656 kubelet[2462]: E0515 12:39:38.662845 2462 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.236.125.189:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-236-125-189&limit=500&resourceVersion=0": dial tcp 172.236.125.189:6443: connect: connection refused May 15 12:39:38.671894 kubelet[2462]: W0515 12:39:38.671824 2462 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.236.125.189:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.236.125.189:6443: connect: connection refused May 15 12:39:38.671894 kubelet[2462]: E0515 12:39:38.671883 2462 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.236.125.189:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.236.125.189:6443: connect: connection refused May 15 12:39:38.672997 kubelet[2462]: I0515 12:39:38.672939 2462 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" May 15 12:39:38.675111 kubelet[2462]: I0515 12:39:38.674375 2462 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 15 12:39:38.675111 kubelet[2462]: W0515 12:39:38.674452 2462 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
May 15 12:39:38.675315 kubelet[2462]: I0515 12:39:38.675302 2462 server.go:1264] "Started kubelet" May 15 12:39:38.677475 kubelet[2462]: I0515 12:39:38.677088 2462 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 15 12:39:38.679761 kubelet[2462]: I0515 12:39:38.679470 2462 server.go:455] "Adding debug handlers to kubelet server" May 15 12:39:38.680084 kubelet[2462]: I0515 12:39:38.680033 2462 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 15 12:39:38.680391 kubelet[2462]: I0515 12:39:38.680376 2462 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 15 12:39:38.680835 kubelet[2462]: E0515 12:39:38.680654 2462 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.236.125.189:6443/api/v1/namespaces/default/events\": dial tcp 172.236.125.189:6443: connect: connection refused" event="&Event{ObjectMeta:{172-236-125-189.183fb3b6ec70d9e7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172-236-125-189,UID:172-236-125-189,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:172-236-125-189,},FirstTimestamp:2025-05-15 12:39:38.675280359 +0000 UTC m=+0.530129942,LastTimestamp:2025-05-15 12:39:38.675280359 +0000 UTC m=+0.530129942,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172-236-125-189,}" May 15 12:39:38.682784 kubelet[2462]: I0515 12:39:38.682643 2462 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 15 12:39:38.685997 kubelet[2462]: E0515 12:39:38.685940 2462 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172-236-125-189\" not found" May 15 12:39:38.686052 kubelet[2462]: I0515 12:39:38.686013 2462 volume_manager.go:291] "Starting Kubelet Volume Manager" May 15 12:39:38.686139 kubelet[2462]: I0515 12:39:38.686113 2462 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 15 12:39:38.686627 kubelet[2462]: I0515 12:39:38.686181 2462 reconciler.go:26] "Reconciler: start to sync state" May 15 12:39:38.686627 kubelet[2462]: W0515 12:39:38.686524 2462 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.236.125.189:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.236.125.189:6443: connect: connection refused May 15 12:39:38.686627 kubelet[2462]: E0515 12:39:38.686566 2462 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.236.125.189:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.236.125.189:6443: connect: connection refused May 15 12:39:38.687163 kubelet[2462]: E0515 12:39:38.687123 2462 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.236.125.189:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-236-125-189?timeout=10s\": dial tcp 172.236.125.189:6443: connect: connection refused" interval="200ms" May 15 12:39:38.687780 kubelet[2462]: E0515 12:39:38.687635 2462 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 15 12:39:38.687928 kubelet[2462]: I0515 12:39:38.687892 2462 factory.go:221] Registration of the systemd container factory successfully May 15 12:39:38.688021 kubelet[2462]: I0515 12:39:38.687994 2462 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 15 12:39:38.689812 kubelet[2462]: I0515 12:39:38.689780 2462 factory.go:221] Registration of the containerd container factory successfully May 15 12:39:38.704009 kubelet[2462]: I0515 12:39:38.703807 2462 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 15 12:39:38.705173 kubelet[2462]: I0515 12:39:38.705136 2462 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 15 12:39:38.705173 kubelet[2462]: I0515 12:39:38.705170 2462 status_manager.go:217] "Starting to sync pod status with apiserver" May 15 12:39:38.705275 kubelet[2462]: I0515 12:39:38.705196 2462 kubelet.go:2337] "Starting kubelet main sync loop" May 15 12:39:38.705310 kubelet[2462]: E0515 12:39:38.705269 2462 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 15 12:39:38.713493 kubelet[2462]: W0515 12:39:38.713336 2462 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.236.125.189:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.236.125.189:6443: connect: connection refused May 15 12:39:38.713493 kubelet[2462]: E0515 12:39:38.713392 2462 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.236.125.189:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.236.125.189:6443: connect: connection refused May 15 12:39:38.722911 kubelet[2462]: I0515 12:39:38.722584 2462 cpu_manager.go:214] "Starting CPU manager" policy="none" May 15 12:39:38.722911 kubelet[2462]: I0515 12:39:38.722601 2462 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 15 12:39:38.722911 kubelet[2462]: I0515 12:39:38.722629 2462 state_mem.go:36] "Initialized new in-memory state store" May 15 12:39:38.724880 kubelet[2462]: I0515 12:39:38.724865 2462 policy_none.go:49] "None policy: Start" May 15 12:39:38.726198 kubelet[2462]: I0515 12:39:38.725690 2462 memory_manager.go:170] "Starting memorymanager" policy="None" May 15 12:39:38.726198 kubelet[2462]: I0515 12:39:38.725712 2462 state_mem.go:35] "Initializing new in-memory state store" May 15 12:39:38.733817 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 15 12:39:38.746780 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 15 12:39:38.751102 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
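[Editor's note] Every "connection refused" from the reflectors above targets https://172.236.125.189:6443. This node is bootstrapping its own control plane, so the API server kubelet is trying to watch is one of the static pods it has not started yet. A standalone probe of the same endpoint, as a sketch (kubelet's own client-go reflectors retry internally and need no such helper):

```go
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	addr := "172.236.125.189:6443" // apiserver endpoint from the log
	for attempt, delay := 1, 200*time.Millisecond; attempt <= 5; attempt++ {
		conn, err := net.DialTimeout("tcp", addr, time.Second)
		if err == nil {
			conn.Close()
			fmt.Println("apiserver reachable")
			return
		}
		fmt.Printf("attempt %d: %v (retrying in %v)\n", attempt, err, delay)
		time.Sleep(delay)
		delay *= 2
	}
	fmt.Println("still refused; expected until the static kube-apiserver pod is up")
}
```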
May 15 12:39:38.762163 kubelet[2462]: I0515 12:39:38.762134 2462 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 15 12:39:38.762514 kubelet[2462]: I0515 12:39:38.762461 2462 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 15 12:39:38.763371 kubelet[2462]: I0515 12:39:38.763355 2462 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 15 12:39:38.766504 kubelet[2462]: E0515 12:39:38.766473 2462 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172-236-125-189\" not found" May 15 12:39:38.788958 kubelet[2462]: I0515 12:39:38.788926 2462 kubelet_node_status.go:73] "Attempting to register node" node="172-236-125-189" May 15 12:39:38.789797 kubelet[2462]: E0515 12:39:38.789754 2462 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.236.125.189:6443/api/v1/nodes\": dial tcp 172.236.125.189:6443: connect: connection refused" node="172-236-125-189" May 15 12:39:38.805953 kubelet[2462]: I0515 12:39:38.805860 2462 topology_manager.go:215] "Topology Admit Handler" podUID="af4e732f44e9f04ff273825713f205f7" podNamespace="kube-system" podName="kube-controller-manager-172-236-125-189" May 15 12:39:38.808140 kubelet[2462]: I0515 12:39:38.808111 2462 topology_manager.go:215] "Topology Admit Handler" podUID="d767d1f2c34fa8c80517b1e96d67d267" podNamespace="kube-system" podName="kube-scheduler-172-236-125-189" May 15 12:39:38.810783 kubelet[2462]: I0515 12:39:38.810524 2462 topology_manager.go:215] "Topology Admit Handler" podUID="d1772abc87cee22384d33e8e74940300" podNamespace="kube-system" podName="kube-apiserver-172-236-125-189" May 15 12:39:38.817526 systemd[1]: Created slice kubepods-burstable-podaf4e732f44e9f04ff273825713f205f7.slice - libcontainer container kubepods-burstable-podaf4e732f44e9f04ff273825713f205f7.slice. May 15 12:39:38.835200 systemd[1]: Created slice kubepods-burstable-podd767d1f2c34fa8c80517b1e96d67d267.slice - libcontainer container kubepods-burstable-podd767d1f2c34fa8c80517b1e96d67d267.slice. May 15 12:39:38.849184 systemd[1]: Created slice kubepods-burstable-podd1772abc87cee22384d33e8e74940300.slice - libcontainer container kubepods-burstable-podd1772abc87cee22384d33e8e74940300.slice. 
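[Editor's note] The three "Topology Admit Handler" entries correspond to the static pod manifests under /etc/kubernetes/manifests ("Adding static pod path" earlier): kubelet admits them without any API server and, with the systemd cgroup driver, creates one burstable slice per pod, as the Created slice lines show. A quick way to list what kubelet will admit — a sketch assuming the default kubeadm layout:

```go
package main

import (
	"fmt"
	"path/filepath"
)

func main() {
	// "Adding static pod path" in the log points here.
	manifests, _ := filepath.Glob("/etc/kubernetes/manifests/*.yaml")
	if len(manifests) == 0 {
		fmt.Println("no static pod manifests yet")
		return
	}
	for _, m := range manifests {
		// Here: kube-apiserver.yaml, kube-controller-manager.yaml,
		// kube-scheduler.yaml (some kubeadm clusters also carry etcd.yaml).
		fmt.Println("static pod manifest:", m)
	}
}
```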
May 15 12:39:38.887128 kubelet[2462]: I0515 12:39:38.887066 2462 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d1772abc87cee22384d33e8e74940300-usr-share-ca-certificates\") pod \"kube-apiserver-172-236-125-189\" (UID: \"d1772abc87cee22384d33e8e74940300\") " pod="kube-system/kube-apiserver-172-236-125-189" May 15 12:39:38.887128 kubelet[2462]: I0515 12:39:38.887113 2462 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/af4e732f44e9f04ff273825713f205f7-ca-certs\") pod \"kube-controller-manager-172-236-125-189\" (UID: \"af4e732f44e9f04ff273825713f205f7\") " pod="kube-system/kube-controller-manager-172-236-125-189" May 15 12:39:38.887128 kubelet[2462]: I0515 12:39:38.887135 2462 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/af4e732f44e9f04ff273825713f205f7-flexvolume-dir\") pod \"kube-controller-manager-172-236-125-189\" (UID: \"af4e732f44e9f04ff273825713f205f7\") " pod="kube-system/kube-controller-manager-172-236-125-189" May 15 12:39:38.887332 kubelet[2462]: I0515 12:39:38.887159 2462 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/af4e732f44e9f04ff273825713f205f7-k8s-certs\") pod \"kube-controller-manager-172-236-125-189\" (UID: \"af4e732f44e9f04ff273825713f205f7\") " pod="kube-system/kube-controller-manager-172-236-125-189" May 15 12:39:38.887332 kubelet[2462]: I0515 12:39:38.887180 2462 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/af4e732f44e9f04ff273825713f205f7-usr-share-ca-certificates\") pod \"kube-controller-manager-172-236-125-189\" (UID: \"af4e732f44e9f04ff273825713f205f7\") " pod="kube-system/kube-controller-manager-172-236-125-189" May 15 12:39:38.887332 kubelet[2462]: I0515 12:39:38.887232 2462 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d767d1f2c34fa8c80517b1e96d67d267-kubeconfig\") pod \"kube-scheduler-172-236-125-189\" (UID: \"d767d1f2c34fa8c80517b1e96d67d267\") " pod="kube-system/kube-scheduler-172-236-125-189" May 15 12:39:38.887332 kubelet[2462]: I0515 12:39:38.887251 2462 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d1772abc87cee22384d33e8e74940300-ca-certs\") pod \"kube-apiserver-172-236-125-189\" (UID: \"d1772abc87cee22384d33e8e74940300\") " pod="kube-system/kube-apiserver-172-236-125-189" May 15 12:39:38.887332 kubelet[2462]: I0515 12:39:38.887267 2462 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d1772abc87cee22384d33e8e74940300-k8s-certs\") pod \"kube-apiserver-172-236-125-189\" (UID: \"d1772abc87cee22384d33e8e74940300\") " pod="kube-system/kube-apiserver-172-236-125-189" May 15 12:39:38.887449 kubelet[2462]: I0515 12:39:38.887284 2462 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/af4e732f44e9f04ff273825713f205f7-kubeconfig\") pod 
\"kube-controller-manager-172-236-125-189\" (UID: \"af4e732f44e9f04ff273825713f205f7\") " pod="kube-system/kube-controller-manager-172-236-125-189" May 15 12:39:38.887828 kubelet[2462]: E0515 12:39:38.887780 2462 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.236.125.189:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-236-125-189?timeout=10s\": dial tcp 172.236.125.189:6443: connect: connection refused" interval="400ms" May 15 12:39:38.992394 kubelet[2462]: I0515 12:39:38.992358 2462 kubelet_node_status.go:73] "Attempting to register node" node="172-236-125-189" May 15 12:39:38.992679 kubelet[2462]: E0515 12:39:38.992648 2462 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.236.125.189:6443/api/v1/nodes\": dial tcp 172.236.125.189:6443: connect: connection refused" node="172-236-125-189" May 15 12:39:39.129947 kubelet[2462]: E0515 12:39:39.129891 2462 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" May 15 12:39:39.130929 containerd[1570]: time="2025-05-15T12:39:39.130889362Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-172-236-125-189,Uid:af4e732f44e9f04ff273825713f205f7,Namespace:kube-system,Attempt:0,}" May 15 12:39:39.138284 kubelet[2462]: E0515 12:39:39.138186 2462 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" May 15 12:39:39.138794 containerd[1570]: time="2025-05-15T12:39:39.138690600Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-172-236-125-189,Uid:d767d1f2c34fa8c80517b1e96d67d267,Namespace:kube-system,Attempt:0,}" May 15 12:39:39.152694 kubelet[2462]: E0515 12:39:39.152645 2462 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" May 15 12:39:39.153755 containerd[1570]: time="2025-05-15T12:39:39.153701908Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-172-236-125-189,Uid:d1772abc87cee22384d33e8e74940300,Namespace:kube-system,Attempt:0,}" May 15 12:39:39.289091 kubelet[2462]: E0515 12:39:39.288958 2462 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.236.125.189:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-236-125-189?timeout=10s\": dial tcp 172.236.125.189:6443: connect: connection refused" interval="800ms" May 15 12:39:39.396405 kubelet[2462]: I0515 12:39:39.396191 2462 kubelet_node_status.go:73] "Attempting to register node" node="172-236-125-189" May 15 12:39:39.396750 kubelet[2462]: E0515 12:39:39.396719 2462 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.236.125.189:6443/api/v1/nodes\": dial tcp 172.236.125.189:6443: connect: connection refused" node="172-236-125-189" May 15 12:39:39.545138 kubelet[2462]: W0515 12:39:39.545029 2462 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.236.125.189:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.236.125.189:6443: connect: connection refused May 15 12:39:39.545288 kubelet[2462]: E0515 12:39:39.545191 2462 reflector.go:150] 
k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.236.125.189:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.236.125.189:6443: connect: connection refused May 15 12:39:39.614061 kubelet[2462]: W0515 12:39:39.613938 2462 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.236.125.189:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.236.125.189:6443: connect: connection refused May 15 12:39:39.614061 kubelet[2462]: E0515 12:39:39.614059 2462 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.236.125.189:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.236.125.189:6443: connect: connection refused May 15 12:39:39.730331 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1800729574.mount: Deactivated successfully. May 15 12:39:39.735044 containerd[1570]: time="2025-05-15T12:39:39.735008223Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 15 12:39:39.736114 containerd[1570]: time="2025-05-15T12:39:39.736065413Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 15 12:39:39.737024 containerd[1570]: time="2025-05-15T12:39:39.737000788Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" May 15 12:39:39.737436 containerd[1570]: time="2025-05-15T12:39:39.737397376Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" May 15 12:39:39.738277 containerd[1570]: time="2025-05-15T12:39:39.738250635Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 15 12:39:39.739237 containerd[1570]: time="2025-05-15T12:39:39.739084560Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 15 12:39:39.739526 containerd[1570]: time="2025-05-15T12:39:39.739501943Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" May 15 12:39:39.741616 containerd[1570]: time="2025-05-15T12:39:39.741593365Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 15 12:39:39.742991 containerd[1570]: time="2025-05-15T12:39:39.742633484Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 609.090015ms" May 15 12:39:39.743700 containerd[1570]: time="2025-05-15T12:39:39.743668206Z" level=info msg="Pulled image 
\"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 587.518416ms" May 15 12:39:39.744853 containerd[1570]: time="2025-05-15T12:39:39.744770899Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 604.594744ms" May 15 12:39:39.789950 containerd[1570]: time="2025-05-15T12:39:39.789911400Z" level=info msg="connecting to shim dd99bdbb5929d0e175a7ed79eda422f27ee704b27ec8ed2fa7222ca4c205ce90" address="unix:///run/containerd/s/3578dc8c586ae41bae96f3746b650c607a20c821ff23264125e05224d2f295f2" namespace=k8s.io protocol=ttrpc version=3 May 15 12:39:39.795232 containerd[1570]: time="2025-05-15T12:39:39.795153524Z" level=info msg="connecting to shim 39300750980ff76be582b024c12805e9b33075a340c1c388b4a38073fcee3340" address="unix:///run/containerd/s/99f76f6a471aa841fa57dbca03f5c1aca5a45cab88ec6e893002eec77c6de669" namespace=k8s.io protocol=ttrpc version=3 May 15 12:39:39.811018 containerd[1570]: time="2025-05-15T12:39:39.810962204Z" level=info msg="connecting to shim 8412043055968c6d2893d70d535fe9aeac0c8b0f455b3d9af2ce3a159eeb21ec" address="unix:///run/containerd/s/f762e4a8431020047bd51aaa39a6400004a1278e1cc8572e7ca24b70105b7686" namespace=k8s.io protocol=ttrpc version=3 May 15 12:39:39.845306 kubelet[2462]: W0515 12:39:39.845217 2462 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.236.125.189:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.236.125.189:6443: connect: connection refused May 15 12:39:39.845748 kubelet[2462]: E0515 12:39:39.845726 2462 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.236.125.189:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.236.125.189:6443: connect: connection refused May 15 12:39:39.862101 systemd[1]: Started cri-containerd-dd99bdbb5929d0e175a7ed79eda422f27ee704b27ec8ed2fa7222ca4c205ce90.scope - libcontainer container dd99bdbb5929d0e175a7ed79eda422f27ee704b27ec8ed2fa7222ca4c205ce90. May 15 12:39:39.888262 systemd[1]: Started cri-containerd-8412043055968c6d2893d70d535fe9aeac0c8b0f455b3d9af2ce3a159eeb21ec.scope - libcontainer container 8412043055968c6d2893d70d535fe9aeac0c8b0f455b3d9af2ce3a159eeb21ec. May 15 12:39:39.915105 systemd[1]: Started cri-containerd-39300750980ff76be582b024c12805e9b33075a340c1c388b4a38073fcee3340.scope - libcontainer container 39300750980ff76be582b024c12805e9b33075a340c1c388b4a38073fcee3340. 
May 15 12:39:39.976864 containerd[1570]: time="2025-05-15T12:39:39.976820549Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-172-236-125-189,Uid:d767d1f2c34fa8c80517b1e96d67d267,Namespace:kube-system,Attempt:0,} returns sandbox id \"dd99bdbb5929d0e175a7ed79eda422f27ee704b27ec8ed2fa7222ca4c205ce90\"" May 15 12:39:39.977824 kubelet[2462]: E0515 12:39:39.977801 2462 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" May 15 12:39:39.985720 containerd[1570]: time="2025-05-15T12:39:39.985256427Z" level=info msg="CreateContainer within sandbox \"dd99bdbb5929d0e175a7ed79eda422f27ee704b27ec8ed2fa7222ca4c205ce90\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 15 12:39:39.993179 containerd[1570]: time="2025-05-15T12:39:39.993119538Z" level=info msg="Container e5962504abc973bbaa782d4785f75bc3ba849082b01f29c13f61b25b62b9ca0b: CDI devices from CRI Config.CDIDevices: []" May 15 12:39:39.997624 containerd[1570]: time="2025-05-15T12:39:39.997594196Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-172-236-125-189,Uid:d1772abc87cee22384d33e8e74940300,Namespace:kube-system,Attempt:0,} returns sandbox id \"8412043055968c6d2893d70d535fe9aeac0c8b0f455b3d9af2ce3a159eeb21ec\"" May 15 12:39:39.998145 kubelet[2462]: E0515 12:39:39.998108 2462 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" May 15 12:39:40.001347 containerd[1570]: time="2025-05-15T12:39:40.001315672Z" level=info msg="CreateContainer within sandbox \"dd99bdbb5929d0e175a7ed79eda422f27ee704b27ec8ed2fa7222ca4c205ce90\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"e5962504abc973bbaa782d4785f75bc3ba849082b01f29c13f61b25b62b9ca0b\"" May 15 12:39:40.001534 containerd[1570]: time="2025-05-15T12:39:40.001482404Z" level=info msg="CreateContainer within sandbox \"8412043055968c6d2893d70d535fe9aeac0c8b0f455b3d9af2ce3a159eeb21ec\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 15 12:39:40.005820 containerd[1570]: time="2025-05-15T12:39:40.005738586Z" level=info msg="Container 9ec4c4ecf29cf3b1ee71b88eec5ee3667282322279e2572b736aa68481ed2b12: CDI devices from CRI Config.CDIDevices: []" May 15 12:39:40.013127 containerd[1570]: time="2025-05-15T12:39:40.012712057Z" level=info msg="StartContainer for \"e5962504abc973bbaa782d4785f75bc3ba849082b01f29c13f61b25b62b9ca0b\"" May 15 12:39:40.015274 containerd[1570]: time="2025-05-15T12:39:40.015211822Z" level=info msg="connecting to shim e5962504abc973bbaa782d4785f75bc3ba849082b01f29c13f61b25b62b9ca0b" address="unix:///run/containerd/s/3578dc8c586ae41bae96f3746b650c607a20c821ff23264125e05224d2f295f2" protocol=ttrpc version=3 May 15 12:39:40.021524 containerd[1570]: time="2025-05-15T12:39:40.021489263Z" level=info msg="CreateContainer within sandbox \"8412043055968c6d2893d70d535fe9aeac0c8b0f455b3d9af2ce3a159eeb21ec\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"9ec4c4ecf29cf3b1ee71b88eec5ee3667282322279e2572b736aa68481ed2b12\"" May 15 12:39:40.022426 containerd[1570]: time="2025-05-15T12:39:40.022398003Z" level=info msg="StartContainer for \"9ec4c4ecf29cf3b1ee71b88eec5ee3667282322279e2572b736aa68481ed2b12\"" May 15 12:39:40.024295 containerd[1570]: time="2025-05-15T12:39:40.024150665Z" 
level=info msg="connecting to shim 9ec4c4ecf29cf3b1ee71b88eec5ee3667282322279e2572b736aa68481ed2b12" address="unix:///run/containerd/s/f762e4a8431020047bd51aaa39a6400004a1278e1cc8572e7ca24b70105b7686" protocol=ttrpc version=3 May 15 12:39:40.035581 containerd[1570]: time="2025-05-15T12:39:40.035523566Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-172-236-125-189,Uid:af4e732f44e9f04ff273825713f205f7,Namespace:kube-system,Attempt:0,} returns sandbox id \"39300750980ff76be582b024c12805e9b33075a340c1c388b4a38073fcee3340\"" May 15 12:39:40.037091 kubelet[2462]: E0515 12:39:40.037056 2462 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" May 15 12:39:40.040171 containerd[1570]: time="2025-05-15T12:39:40.040043890Z" level=info msg="CreateContainer within sandbox \"39300750980ff76be582b024c12805e9b33075a340c1c388b4a38073fcee3340\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 15 12:39:40.044223 systemd[1]: Started cri-containerd-e5962504abc973bbaa782d4785f75bc3ba849082b01f29c13f61b25b62b9ca0b.scope - libcontainer container e5962504abc973bbaa782d4785f75bc3ba849082b01f29c13f61b25b62b9ca0b. May 15 12:39:40.054991 containerd[1570]: time="2025-05-15T12:39:40.054535306Z" level=info msg="Container c43b2dd0ff4d05dceb0b7dcdb0fdac15cd20b7527dae6d015ee38484fb407a35: CDI devices from CRI Config.CDIDevices: []" May 15 12:39:40.066505 containerd[1570]: time="2025-05-15T12:39:40.066130806Z" level=info msg="CreateContainer within sandbox \"39300750980ff76be582b024c12805e9b33075a340c1c388b4a38073fcee3340\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"c43b2dd0ff4d05dceb0b7dcdb0fdac15cd20b7527dae6d015ee38484fb407a35\"" May 15 12:39:40.071843 containerd[1570]: time="2025-05-15T12:39:40.071823974Z" level=info msg="StartContainer for \"c43b2dd0ff4d05dceb0b7dcdb0fdac15cd20b7527dae6d015ee38484fb407a35\"" May 15 12:39:40.073178 containerd[1570]: time="2025-05-15T12:39:40.073157823Z" level=info msg="connecting to shim c43b2dd0ff4d05dceb0b7dcdb0fdac15cd20b7527dae6d015ee38484fb407a35" address="unix:///run/containerd/s/99f76f6a471aa841fa57dbca03f5c1aca5a45cab88ec6e893002eec77c6de669" protocol=ttrpc version=3 May 15 12:39:40.076112 systemd[1]: Started cri-containerd-9ec4c4ecf29cf3b1ee71b88eec5ee3667282322279e2572b736aa68481ed2b12.scope - libcontainer container 9ec4c4ecf29cf3b1ee71b88eec5ee3667282322279e2572b736aa68481ed2b12. 
May 15 12:39:40.291565 kubelet[2462]: E0515 12:39:40.290048 2462 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.236.125.189:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-236-125-189?timeout=10s\": dial tcp 172.236.125.189:6443: connect: connection refused" interval="1.6s" May 15 12:39:40.291565 kubelet[2462]: W0515 12:39:40.290135 2462 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.236.125.189:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-236-125-189&limit=500&resourceVersion=0": dial tcp 172.236.125.189:6443: connect: connection refused May 15 12:39:40.291565 kubelet[2462]: E0515 12:39:40.290180 2462 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.236.125.189:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-236-125-189&limit=500&resourceVersion=0": dial tcp 172.236.125.189:6443: connect: connection refused May 15 12:39:40.296490 kubelet[2462]: I0515 12:39:40.296406 2462 kubelet_node_status.go:73] "Attempting to register node" node="172-236-125-189" May 15 12:39:40.296702 kubelet[2462]: E0515 12:39:40.296637 2462 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.236.125.189:6443/api/v1/nodes\": dial tcp 172.236.125.189:6443: connect: connection refused" node="172-236-125-189" May 15 12:39:40.333127 systemd[1]: Started cri-containerd-c43b2dd0ff4d05dceb0b7dcdb0fdac15cd20b7527dae6d015ee38484fb407a35.scope - libcontainer container c43b2dd0ff4d05dceb0b7dcdb0fdac15cd20b7527dae6d015ee38484fb407a35. May 15 12:39:40.382612 containerd[1570]: time="2025-05-15T12:39:40.382571584Z" level=info msg="StartContainer for \"e5962504abc973bbaa782d4785f75bc3ba849082b01f29c13f61b25b62b9ca0b\" returns successfully" May 15 12:39:40.385237 containerd[1570]: time="2025-05-15T12:39:40.385218321Z" level=info msg="StartContainer for \"9ec4c4ecf29cf3b1ee71b88eec5ee3667282322279e2572b736aa68481ed2b12\" returns successfully" May 15 12:39:40.659576 containerd[1570]: time="2025-05-15T12:39:40.659493887Z" level=info msg="StartContainer for \"c43b2dd0ff4d05dceb0b7dcdb0fdac15cd20b7527dae6d015ee38484fb407a35\" returns successfully" May 15 12:39:40.749728 kubelet[2462]: E0515 12:39:40.749683 2462 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" May 15 12:39:40.753480 kubelet[2462]: E0515 12:39:40.753461 2462 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" May 15 12:39:40.755865 kubelet[2462]: E0515 12:39:40.755806 2462 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" May 15 12:39:40.986924 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
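[Editor's note] Note the lease-retry interval across the "Failed to ensure lease exists" errors: 200ms, then 400ms, 800ms, and 1.6s here — the controller doubles its delay while the API server stays unreachable. A sketch of that doubling; the cap value below is illustrative, not quoted from the log (kubelet's lease controller bounds its backoff as well):

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	delay := 200 * time.Millisecond // first interval seen in the log
	const maxDelay = 7 * time.Second // illustrative cap, an assumption
	for i := 1; i <= 4; i++ {
		fmt.Printf("retry %d: interval=%v\n", i, delay) // 200ms, 400ms, 800ms, 1.6s
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}
```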
May 15 12:39:41.760052 kubelet[2462]: E0515 12:39:41.759952 2462 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" May 15 12:39:41.918442 kubelet[2462]: I0515 12:39:41.918393 2462 kubelet_node_status.go:73] "Attempting to register node" node="172-236-125-189" May 15 12:39:42.729930 kubelet[2462]: E0515 12:39:42.729811 2462 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"172-236-125-189\" not found" node="172-236-125-189" May 15 12:39:42.822721 kubelet[2462]: I0515 12:39:42.822668 2462 kubelet_node_status.go:76] "Successfully registered node" node="172-236-125-189" May 15 12:39:42.836593 kubelet[2462]: E0515 12:39:42.836552 2462 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172-236-125-189\" not found" May 15 12:39:43.571847 kubelet[2462]: E0515 12:39:43.571788 2462 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-172-236-125-189\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-172-236-125-189" May 15 12:39:43.572713 kubelet[2462]: E0515 12:39:43.572326 2462 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" May 15 12:39:43.738378 kubelet[2462]: I0515 12:39:43.738326 2462 apiserver.go:52] "Watching apiserver" May 15 12:39:43.786726 kubelet[2462]: I0515 12:39:43.786676 2462 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 15 12:39:43.824901 kubelet[2462]: E0515 12:39:43.824534 2462 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" May 15 12:39:44.765571 kubelet[2462]: E0515 12:39:44.765503 2462 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" May 15 12:39:45.076317 systemd[1]: Reload requested from client PID 2736 ('systemctl') (unit session-7.scope)... May 15 12:39:45.076340 systemd[1]: Reloading... May 15 12:39:45.293052 zram_generator::config[2779]: No configuration found. May 15 12:39:45.384825 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 15 12:39:45.504112 systemd[1]: Reloading finished in 427 ms. May 15 12:39:45.529803 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 15 12:39:45.539746 systemd[1]: kubelet.service: Deactivated successfully. May 15 12:39:45.540018 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 15 12:39:45.540058 systemd[1]: kubelet.service: Consumed 1.000s CPU time, 115.4M memory peak. May 15 12:39:45.544686 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 15 12:39:45.752849 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
May 15 12:39:45.759359 (kubelet)[2830]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 15 12:39:45.828922 kubelet[2830]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 15 12:39:45.828922 kubelet[2830]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 15 12:39:45.828922 kubelet[2830]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 15 12:39:45.828922 kubelet[2830]: I0515 12:39:45.828596 2830 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 15 12:39:45.834285 kubelet[2830]: I0515 12:39:45.834253 2830 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" May 15 12:39:45.834285 kubelet[2830]: I0515 12:39:45.834272 2830 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 15 12:39:45.834451 kubelet[2830]: I0515 12:39:45.834427 2830 server.go:927] "Client rotation is on, will bootstrap in background" May 15 12:39:45.835543 kubelet[2830]: I0515 12:39:45.835519 2830 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 15 12:39:45.836997 kubelet[2830]: I0515 12:39:45.836662 2830 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 15 12:39:45.849211 kubelet[2830]: I0515 12:39:45.849179 2830 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 15 12:39:45.849524 kubelet[2830]: I0515 12:39:45.849493 2830 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 15 12:39:45.849679 kubelet[2830]: I0515 12:39:45.849523 2830 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172-236-125-189","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} May 15 12:39:45.849778 kubelet[2830]: I0515 12:39:45.849710 2830 topology_manager.go:138] "Creating topology manager with none policy" May 15 12:39:45.849778 kubelet[2830]: I0515 12:39:45.849725 2830 container_manager_linux.go:301] "Creating device plugin manager" May 15 12:39:45.849846 kubelet[2830]: I0515 12:39:45.849809 2830 state_mem.go:36] "Initialized new in-memory state store" May 15 12:39:45.849942 kubelet[2830]: I0515 12:39:45.849924 2830 kubelet.go:400] "Attempting to sync node with API server" May 15 12:39:45.849942 kubelet[2830]: I0515 12:39:45.849942 2830 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" May 15 12:39:45.850415 kubelet[2830]: I0515 12:39:45.850326 2830 kubelet.go:312] "Adding apiserver pod source" May 15 12:39:45.850415 kubelet[2830]: I0515 12:39:45.850361 2830 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 15 12:39:45.857511 kubelet[2830]: I0515 12:39:45.857104 2830 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" May 15 12:39:45.857511 kubelet[2830]: I0515 12:39:45.857369 2830 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 15 12:39:45.858026 kubelet[2830]: I0515 12:39:45.857917 2830 server.go:1264] "Started kubelet" May 15 12:39:45.866847 kubelet[2830]: I0515 12:39:45.866819 2830 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 15 12:39:45.867352 kubelet[2830]: I0515 12:39:45.867289 2830 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 15 12:39:45.868394 kubelet[2830]: I0515 12:39:45.868364 2830 server.go:455] "Adding debug handlers 
to kubelet server" May 15 12:39:45.869461 kubelet[2830]: I0515 12:39:45.869447 2830 volume_manager.go:291] "Starting Kubelet Volume Manager" May 15 12:39:45.870002 kubelet[2830]: I0515 12:39:45.869872 2830 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 15 12:39:45.870172 kubelet[2830]: I0515 12:39:45.870158 2830 reconciler.go:26] "Reconciler: start to sync state" May 15 12:39:45.870261 kubelet[2830]: I0515 12:39:45.870210 2830 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 15 12:39:45.870561 kubelet[2830]: I0515 12:39:45.870429 2830 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 15 12:39:45.882710 kubelet[2830]: I0515 12:39:45.882604 2830 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 15 12:39:45.883776 kubelet[2830]: I0515 12:39:45.883758 2830 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 15 12:39:45.883873 kubelet[2830]: I0515 12:39:45.883863 2830 status_manager.go:217] "Starting to sync pod status with apiserver" May 15 12:39:45.883942 kubelet[2830]: I0515 12:39:45.883933 2830 kubelet.go:2337] "Starting kubelet main sync loop" May 15 12:39:45.884081 kubelet[2830]: E0515 12:39:45.884064 2830 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 15 12:39:45.890581 kubelet[2830]: I0515 12:39:45.890541 2830 factory.go:221] Registration of the systemd container factory successfully May 15 12:39:45.891805 kubelet[2830]: I0515 12:39:45.890683 2830 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 15 12:39:45.896580 kubelet[2830]: I0515 12:39:45.896553 2830 factory.go:221] Registration of the containerd container factory successfully May 15 12:39:45.897292 kubelet[2830]: E0515 12:39:45.897273 2830 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 15 12:39:45.966405 kubelet[2830]: I0515 12:39:45.966358 2830 cpu_manager.go:214] "Starting CPU manager" policy="none" May 15 12:39:45.966602 kubelet[2830]: I0515 12:39:45.966561 2830 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 15 12:39:45.966684 kubelet[2830]: I0515 12:39:45.966673 2830 state_mem.go:36] "Initialized new in-memory state store" May 15 12:39:45.966889 kubelet[2830]: I0515 12:39:45.966875 2830 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 15 12:39:45.966961 kubelet[2830]: I0515 12:39:45.966939 2830 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 15 12:39:45.967049 kubelet[2830]: I0515 12:39:45.967040 2830 policy_none.go:49] "None policy: Start" May 15 12:39:45.967938 kubelet[2830]: I0515 12:39:45.967904 2830 memory_manager.go:170] "Starting memorymanager" policy="None" May 15 12:39:45.968001 kubelet[2830]: I0515 12:39:45.967947 2830 state_mem.go:35] "Initializing new in-memory state store" May 15 12:39:45.968192 kubelet[2830]: I0515 12:39:45.968162 2830 state_mem.go:75] "Updated machine memory state" May 15 12:39:45.973433 kubelet[2830]: I0515 12:39:45.973405 2830 kubelet_node_status.go:73] "Attempting to register node" node="172-236-125-189" May 15 12:39:45.973937 kubelet[2830]: I0515 12:39:45.973912 2830 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 15 12:39:45.974170 kubelet[2830]: I0515 12:39:45.974117 2830 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 15 12:39:45.974253 kubelet[2830]: I0515 12:39:45.974230 2830 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 15 12:39:45.984924 kubelet[2830]: I0515 12:39:45.984603 2830 topology_manager.go:215] "Topology Admit Handler" podUID="d767d1f2c34fa8c80517b1e96d67d267" podNamespace="kube-system" podName="kube-scheduler-172-236-125-189" May 15 12:39:45.984924 kubelet[2830]: I0515 12:39:45.984692 2830 topology_manager.go:215] "Topology Admit Handler" podUID="d1772abc87cee22384d33e8e74940300" podNamespace="kube-system" podName="kube-apiserver-172-236-125-189" May 15 12:39:45.984924 kubelet[2830]: I0515 12:39:45.984739 2830 topology_manager.go:215] "Topology Admit Handler" podUID="af4e732f44e9f04ff273825713f205f7" podNamespace="kube-system" podName="kube-controller-manager-172-236-125-189" May 15 12:39:45.986881 kubelet[2830]: I0515 12:39:45.986864 2830 kubelet_node_status.go:112] "Node was previously registered" node="172-236-125-189" May 15 12:39:45.987321 kubelet[2830]: I0515 12:39:45.987215 2830 kubelet_node_status.go:76] "Successfully registered node" node="172-236-125-189" May 15 12:39:45.998155 kubelet[2830]: E0515 12:39:45.998043 2830 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-scheduler-172-236-125-189\" already exists" pod="kube-system/kube-scheduler-172-236-125-189" May 15 12:39:46.072089 kubelet[2830]: I0515 12:39:46.071602 2830 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d1772abc87cee22384d33e8e74940300-usr-share-ca-certificates\") pod \"kube-apiserver-172-236-125-189\" (UID: \"d1772abc87cee22384d33e8e74940300\") " pod="kube-system/kube-apiserver-172-236-125-189" May 15 12:39:46.072089 kubelet[2830]: I0515 12:39:46.071652 2830 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/af4e732f44e9f04ff273825713f205f7-ca-certs\") pod \"kube-controller-manager-172-236-125-189\" (UID: \"af4e732f44e9f04ff273825713f205f7\") " pod="kube-system/kube-controller-manager-172-236-125-189" May 15 12:39:46.072089 kubelet[2830]: I0515 12:39:46.071681 2830 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/af4e732f44e9f04ff273825713f205f7-flexvolume-dir\") pod \"kube-controller-manager-172-236-125-189\" (UID: \"af4e732f44e9f04ff273825713f205f7\") " pod="kube-system/kube-controller-manager-172-236-125-189" May 15 12:39:46.072089 kubelet[2830]: I0515 12:39:46.071706 2830 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/af4e732f44e9f04ff273825713f205f7-k8s-certs\") pod \"kube-controller-manager-172-236-125-189\" (UID: \"af4e732f44e9f04ff273825713f205f7\") " pod="kube-system/kube-controller-manager-172-236-125-189" May 15 12:39:46.072089 kubelet[2830]: I0515 12:39:46.071732 2830 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/af4e732f44e9f04ff273825713f205f7-kubeconfig\") pod \"kube-controller-manager-172-236-125-189\" (UID: \"af4e732f44e9f04ff273825713f205f7\") " pod="kube-system/kube-controller-manager-172-236-125-189" May 15 12:39:46.073057 kubelet[2830]: I0515 12:39:46.071751 2830 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d767d1f2c34fa8c80517b1e96d67d267-kubeconfig\") pod \"kube-scheduler-172-236-125-189\" (UID: \"d767d1f2c34fa8c80517b1e96d67d267\") " pod="kube-system/kube-scheduler-172-236-125-189" May 15 12:39:46.073057 kubelet[2830]: I0515 12:39:46.071775 2830 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d1772abc87cee22384d33e8e74940300-ca-certs\") pod \"kube-apiserver-172-236-125-189\" (UID: \"d1772abc87cee22384d33e8e74940300\") " pod="kube-system/kube-apiserver-172-236-125-189" May 15 12:39:46.073057 kubelet[2830]: I0515 12:39:46.071796 2830 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/af4e732f44e9f04ff273825713f205f7-usr-share-ca-certificates\") pod \"kube-controller-manager-172-236-125-189\" (UID: \"af4e732f44e9f04ff273825713f205f7\") " pod="kube-system/kube-controller-manager-172-236-125-189" May 15 12:39:46.073057 kubelet[2830]: I0515 12:39:46.071812 2830 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d1772abc87cee22384d33e8e74940300-k8s-certs\") pod \"kube-apiserver-172-236-125-189\" (UID: \"d1772abc87cee22384d33e8e74940300\") " pod="kube-system/kube-apiserver-172-236-125-189" May 15 12:39:46.299458 kubelet[2830]: E0515 12:39:46.299417 2830 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" May 15 12:39:46.301057 kubelet[2830]: E0515 12:39:46.299907 2830 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" May 15 12:39:46.301057 kubelet[2830]: E0515 12:39:46.300245 2830 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" May 15 12:39:46.855913 kubelet[2830]: I0515 12:39:46.855485 2830 apiserver.go:52] "Watching apiserver" May 15 12:39:46.870787 kubelet[2830]: I0515 12:39:46.870743 2830 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 15 12:39:46.939787 kubelet[2830]: E0515 12:39:46.936185 2830 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" May 15 12:39:46.939787 kubelet[2830]: E0515 12:39:46.936341 2830 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" May 15 12:39:46.951826 kubelet[2830]: E0515 12:39:46.951378 2830 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-172-236-125-189\" already exists" pod="kube-system/kube-apiserver-172-236-125-189" May 15 12:39:46.951826 kubelet[2830]: E0515 12:39:46.951691 2830 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" May 15 12:39:46.997819 kubelet[2830]: I0515 12:39:46.997779 2830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-172-236-125-189" podStartSLOduration=1.9977644460000001 podStartE2EDuration="1.997764446s" podCreationTimestamp="2025-05-15 12:39:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 12:39:46.980308027 +0000 UTC m=+1.217546901" watchObservedRunningTime="2025-05-15 12:39:46.997764446 +0000 UTC m=+1.235003320" May 15 12:39:46.998097 kubelet[2830]: I0515 12:39:46.998064 2830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-172-236-125-189" podStartSLOduration=3.998058668 podStartE2EDuration="3.998058668s" podCreationTimestamp="2025-05-15 12:39:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 12:39:46.997679824 +0000 UTC m=+1.234918698" watchObservedRunningTime="2025-05-15 12:39:46.998058668 +0000 UTC m=+1.235297542" May 15 12:39:47.936763 kubelet[2830]: E0515 12:39:47.936477 2830 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" May 15 12:39:52.051422 kubelet[2830]: E0515 12:39:52.051043 2830 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" May 15 12:39:52.065247 kubelet[2830]: I0515 12:39:52.065193 2830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-172-236-125-189" podStartSLOduration=7.065154582 podStartE2EDuration="7.065154582s" 
podCreationTimestamp="2025-05-15 12:39:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 12:39:47.011228467 +0000 UTC m=+1.248467351" watchObservedRunningTime="2025-05-15 12:39:52.065154582 +0000 UTC m=+6.302393466" May 15 12:39:52.351755 sudo[1806]: pam_unix(sudo:session): session closed for user root May 15 12:39:52.403765 sshd[1805]: Connection closed by 139.178.89.65 port 51786 May 15 12:39:52.404386 sshd-session[1801]: pam_unix(sshd:session): session closed for user core May 15 12:39:52.407957 systemd[1]: sshd@6-172.236.125.189:22-139.178.89.65:51786.service: Deactivated successfully. May 15 12:39:52.410879 systemd[1]: session-7.scope: Deactivated successfully. May 15 12:39:52.411402 systemd[1]: session-7.scope: Consumed 5.545s CPU time, 250.9M memory peak. May 15 12:39:52.413527 systemd-logind[1541]: Session 7 logged out. Waiting for processes to exit. May 15 12:39:52.415427 systemd-logind[1541]: Removed session 7. May 15 12:39:52.944621 kubelet[2830]: E0515 12:39:52.944340 2830 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" May 15 12:39:55.387995 kubelet[2830]: E0515 12:39:55.387741 2830 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" May 15 12:39:55.633881 kubelet[2830]: E0515 12:39:55.633798 2830 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" May 15 12:39:55.748435 update_engine[1542]: I20250515 12:39:55.748302 1542 update_attempter.cc:509] Updating boot flags... May 15 12:39:55.948889 kubelet[2830]: E0515 12:39:55.948473 2830 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" May 15 12:39:55.949004 kubelet[2830]: E0515 12:39:55.948990 2830 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" May 15 12:39:59.970183 kubelet[2830]: I0515 12:39:59.970144 2830 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 15 12:39:59.971599 containerd[1570]: time="2025-05-15T12:39:59.971550364Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 15 12:39:59.971928 kubelet[2830]: I0515 12:39:59.971899 2830 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 15 12:39:59.989869 kubelet[2830]: I0515 12:39:59.989261 2830 topology_manager.go:215] "Topology Admit Handler" podUID="b6cce3a6-2d28-41f4-97b4-5ab19713b8f7" podNamespace="tigera-operator" podName="tigera-operator-797db67f8-mwhds" May 15 12:39:59.998931 systemd[1]: Created slice kubepods-besteffort-podb6cce3a6_2d28_41f4_97b4_5ab19713b8f7.slice - libcontainer container kubepods-besteffort-podb6cce3a6_2d28_41f4_97b4_5ab19713b8f7.slice. 
May 15 12:40:00.008134 kubelet[2830]: W0515 12:40:00.008075 2830 reflector.go:547] object-"tigera-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:172-236-125-189" cannot list resource "configmaps" in API group "" in the namespace "tigera-operator": no relationship found between node '172-236-125-189' and this object
May 15 12:40:00.008134 kubelet[2830]: E0515 12:40:00.008112 2830 reflector.go:150] object-"tigera-operator"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:172-236-125-189" cannot list resource "configmaps" in API group "" in the namespace "tigera-operator": no relationship found between node '172-236-125-189' and this object
May 15 12:40:00.008459 kubelet[2830]: W0515 12:40:00.008309 2830 reflector.go:547] object-"tigera-operator"/"kubernetes-services-endpoint": failed to list *v1.ConfigMap: configmaps "kubernetes-services-endpoint" is forbidden: User "system:node:172-236-125-189" cannot list resource "configmaps" in API group "" in the namespace "tigera-operator": no relationship found between node '172-236-125-189' and this object
May 15 12:40:00.008459 kubelet[2830]: E0515 12:40:00.008335 2830 reflector.go:150] object-"tigera-operator"/"kubernetes-services-endpoint": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kubernetes-services-endpoint" is forbidden: User "system:node:172-236-125-189" cannot list resource "configmaps" in API group "" in the namespace "tigera-operator": no relationship found between node '172-236-125-189' and this object
May 15 12:40:00.044220 kubelet[2830]: I0515 12:40:00.044177 2830 topology_manager.go:215] "Topology Admit Handler" podUID="0b520223-a617-4965-ad70-87aded7b6a11" podNamespace="kube-system" podName="kube-proxy-wpxsb"
May 15 12:40:00.055794 systemd[1]: Created slice kubepods-besteffort-pod0b520223_a617_4965_ad70_87aded7b6a11.slice - libcontainer container kubepods-besteffort-pod0b520223_a617_4965_ad70_87aded7b6a11.slice.
May 15 12:40:00.074078 kubelet[2830]: W0515 12:40:00.073791 2830 reflector.go:547] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:172-236-125-189" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node '172-236-125-189' and this object
May 15 12:40:00.074078 kubelet[2830]: E0515 12:40:00.073828 2830 reflector.go:150] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:172-236-125-189" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node '172-236-125-189' and this object
May 15 12:40:00.074078 kubelet[2830]: W0515 12:40:00.073867 2830 reflector.go:547] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:172-236-125-189" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node '172-236-125-189' and this object
May 15 12:40:00.074078 kubelet[2830]: E0515 12:40:00.073878 2830 reflector.go:150] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:172-236-125-189" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node '172-236-125-189' and this object
May 15 12:40:00.135366 kubelet[2830]: I0515 12:40:00.135219 2830 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wx8m9\" (UniqueName: \"kubernetes.io/projected/0b520223-a617-4965-ad70-87aded7b6a11-kube-api-access-wx8m9\") pod \"kube-proxy-wpxsb\" (UID: \"0b520223-a617-4965-ad70-87aded7b6a11\") " pod="kube-system/kube-proxy-wpxsb"
May 15 12:40:00.135366 kubelet[2830]: I0515 12:40:00.135299 2830 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/0b520223-a617-4965-ad70-87aded7b6a11-kube-proxy\") pod \"kube-proxy-wpxsb\" (UID: \"0b520223-a617-4965-ad70-87aded7b6a11\") " pod="kube-system/kube-proxy-wpxsb"
May 15 12:40:00.135366 kubelet[2830]: I0515 12:40:00.135349 2830 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0b520223-a617-4965-ad70-87aded7b6a11-lib-modules\") pod \"kube-proxy-wpxsb\" (UID: \"0b520223-a617-4965-ad70-87aded7b6a11\") " pod="kube-system/kube-proxy-wpxsb"
May 15 12:40:00.135366 kubelet[2830]: I0515 12:40:00.135366 2830 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0b520223-a617-4965-ad70-87aded7b6a11-xtables-lock\") pod \"kube-proxy-wpxsb\" (UID: \"0b520223-a617-4965-ad70-87aded7b6a11\") " pod="kube-system/kube-proxy-wpxsb"
May 15 12:40:00.135366 kubelet[2830]: I0515 12:40:00.135394 2830 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/b6cce3a6-2d28-41f4-97b4-5ab19713b8f7-var-lib-calico\") pod \"tigera-operator-797db67f8-mwhds\" (UID: \"b6cce3a6-2d28-41f4-97b4-5ab19713b8f7\") " pod="tigera-operator/tigera-operator-797db67f8-mwhds"
May 15 12:40:00.135771 kubelet[2830]: I0515 12:40:00.135417 2830 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gl7wb\" (UniqueName: \"kubernetes.io/projected/b6cce3a6-2d28-41f4-97b4-5ab19713b8f7-kube-api-access-gl7wb\") pod \"tigera-operator-797db67f8-mwhds\" (UID: \"b6cce3a6-2d28-41f4-97b4-5ab19713b8f7\") " pod="tigera-operator/tigera-operator-797db67f8-mwhds"
May 15 12:40:01.245175 kubelet[2830]: E0515 12:40:01.245132 2830 projected.go:294] Couldn't get configMap tigera-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
May 15 12:40:01.245175 kubelet[2830]: E0515 12:40:01.245171 2830 projected.go:200] Error preparing data for projected volume kube-api-access-gl7wb for pod tigera-operator/tigera-operator-797db67f8-mwhds: failed to sync configmap cache: timed out waiting for the condition
May 15 12:40:01.246539 kubelet[2830]: E0515 12:40:01.245252 2830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b6cce3a6-2d28-41f4-97b4-5ab19713b8f7-kube-api-access-gl7wb podName:b6cce3a6-2d28-41f4-97b4-5ab19713b8f7 nodeName:}" failed. No retries permitted until 2025-05-15 12:40:01.745233174 +0000 UTC m=+15.982472048 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-gl7wb" (UniqueName: "kubernetes.io/projected/b6cce3a6-2d28-41f4-97b4-5ab19713b8f7-kube-api-access-gl7wb") pod "tigera-operator-797db67f8-mwhds" (UID: "b6cce3a6-2d28-41f4-97b4-5ab19713b8f7") : failed to sync configmap cache: timed out waiting for the condition
May 15 12:40:01.259267 kubelet[2830]: E0515 12:40:01.258930 2830 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16"
May 15 12:40:01.259591 containerd[1570]: time="2025-05-15T12:40:01.259551902Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wpxsb,Uid:0b520223-a617-4965-ad70-87aded7b6a11,Namespace:kube-system,Attempt:0,}"
May 15 12:40:01.280005 containerd[1570]: time="2025-05-15T12:40:01.278847695Z" level=info msg="connecting to shim 8e4277a7312486f669c8f2fc1c989aa07d449c226a9b0a5a152cc6c13675fe42" address="unix:///run/containerd/s/206dd2db1ede6fc4ca99b289a65630837bb874724e318b239a04fda8260a7043" namespace=k8s.io protocol=ttrpc version=3
May 15 12:40:01.313110 systemd[1]: Started cri-containerd-8e4277a7312486f669c8f2fc1c989aa07d449c226a9b0a5a152cc6c13675fe42.scope - libcontainer container 8e4277a7312486f669c8f2fc1c989aa07d449c226a9b0a5a152cc6c13675fe42.
May 15 12:40:01.345465 containerd[1570]: time="2025-05-15T12:40:01.345358578Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wpxsb,Uid:0b520223-a617-4965-ad70-87aded7b6a11,Namespace:kube-system,Attempt:0,} returns sandbox id \"8e4277a7312486f669c8f2fc1c989aa07d449c226a9b0a5a152cc6c13675fe42\""
May 15 12:40:01.346158 kubelet[2830]: E0515 12:40:01.346134 2830 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16"
May 15 12:40:01.348390 containerd[1570]: time="2025-05-15T12:40:01.348368332Z" level=info msg="CreateContainer within sandbox \"8e4277a7312486f669c8f2fc1c989aa07d449c226a9b0a5a152cc6c13675fe42\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
May 15 12:40:01.363999 containerd[1570]: time="2025-05-15T12:40:01.363862701Z" level=info msg="Container 33e2a3b7e29ddaa1b34cdd5852b716f830403f62e1daf5d2cbbdd943c8d0099e: CDI devices from CRI Config.CDIDevices: []"
May 15 12:40:01.369010 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1052053223.mount: Deactivated successfully.
May 15 12:40:01.372664 containerd[1570]: time="2025-05-15T12:40:01.372631037Z" level=info msg="CreateContainer within sandbox \"8e4277a7312486f669c8f2fc1c989aa07d449c226a9b0a5a152cc6c13675fe42\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"33e2a3b7e29ddaa1b34cdd5852b716f830403f62e1daf5d2cbbdd943c8d0099e\""
May 15 12:40:01.374036 containerd[1570]: time="2025-05-15T12:40:01.373899920Z" level=info msg="StartContainer for \"33e2a3b7e29ddaa1b34cdd5852b716f830403f62e1daf5d2cbbdd943c8d0099e\""
May 15 12:40:01.375718 containerd[1570]: time="2025-05-15T12:40:01.375697867Z" level=info msg="connecting to shim 33e2a3b7e29ddaa1b34cdd5852b716f830403f62e1daf5d2cbbdd943c8d0099e" address="unix:///run/containerd/s/206dd2db1ede6fc4ca99b289a65630837bb874724e318b239a04fda8260a7043" protocol=ttrpc version=3
May 15 12:40:01.397094 systemd[1]: Started cri-containerd-33e2a3b7e29ddaa1b34cdd5852b716f830403f62e1daf5d2cbbdd943c8d0099e.scope - libcontainer container 33e2a3b7e29ddaa1b34cdd5852b716f830403f62e1daf5d2cbbdd943c8d0099e.
May 15 12:40:01.455934 containerd[1570]: time="2025-05-15T12:40:01.455821114Z" level=info msg="StartContainer for \"33e2a3b7e29ddaa1b34cdd5852b716f830403f62e1daf5d2cbbdd943c8d0099e\" returns successfully"
May 15 12:40:01.809680 containerd[1570]: time="2025-05-15T12:40:01.809619726Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-797db67f8-mwhds,Uid:b6cce3a6-2d28-41f4-97b4-5ab19713b8f7,Namespace:tigera-operator,Attempt:0,}"
May 15 12:40:01.857065 containerd[1570]: time="2025-05-15T12:40:01.856914101Z" level=info msg="connecting to shim 32010158f204e015342869c6491eb79c122d5bd2e59de4e487a1fa1e8f104b28" address="unix:///run/containerd/s/4f65759d2648375a17e9a26ec95c30b4469a5cebdc02f6cdb68d35cdb8368c1e" namespace=k8s.io protocol=ttrpc version=3
May 15 12:40:01.900093 systemd[1]: Started cri-containerd-32010158f204e015342869c6491eb79c122d5bd2e59de4e487a1fa1e8f104b28.scope - libcontainer container 32010158f204e015342869c6491eb79c122d5bd2e59de4e487a1fa1e8f104b28.
May 15 12:40:01.963293 kubelet[2830]: E0515 12:40:01.963257 2830 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16"
May 15 12:40:01.966287 containerd[1570]: time="2025-05-15T12:40:01.966207879Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-797db67f8-mwhds,Uid:b6cce3a6-2d28-41f4-97b4-5ab19713b8f7,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"32010158f204e015342869c6491eb79c122d5bd2e59de4e487a1fa1e8f104b28\""
May 15 12:40:01.969605 containerd[1570]: time="2025-05-15T12:40:01.969535674Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\""
May 15 12:40:01.975948 kubelet[2830]: I0515 12:40:01.975787 2830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-wpxsb" podStartSLOduration=2.975770838 podStartE2EDuration="2.975770838s" podCreationTimestamp="2025-05-15 12:39:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 12:40:01.975515915 +0000 UTC m=+16.212754789" watchObservedRunningTime="2025-05-15 12:40:01.975770838 +0000 UTC m=+16.213009712"
May 15 12:40:03.247895 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1536823040.mount: Deactivated successfully.
May 15 12:40:04.356063 containerd[1570]: time="2025-05-15T12:40:04.356005305Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 12:40:04.356812 containerd[1570]: time="2025-05-15T12:40:04.356714950Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.7: active requests=0, bytes read=22002662"
May 15 12:40:04.357360 containerd[1570]: time="2025-05-15T12:40:04.357330959Z" level=info msg="ImageCreate event name:\"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 12:40:04.358885 containerd[1570]: time="2025-05-15T12:40:04.358863432Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 12:40:04.359496 containerd[1570]: time="2025-05-15T12:40:04.359460744Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.7\" with image id \"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\", repo tag \"quay.io/tigera/operator:v1.36.7\", repo digest \"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\", size \"21998657\" in 2.389897539s"
May 15 12:40:04.359539 containerd[1570]: time="2025-05-15T12:40:04.359495778Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\" returns image reference \"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\""
May 15 12:40:04.362828 containerd[1570]: time="2025-05-15T12:40:04.362781651Z" level=info msg="CreateContainer within sandbox \"32010158f204e015342869c6491eb79c122d5bd2e59de4e487a1fa1e8f104b28\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
May 15 12:40:04.370532 containerd[1570]: time="2025-05-15T12:40:04.370496681Z" level=info msg="Container 702febf99e532741c5b757158d6fdb43df081ab45ab5f85883f75fd5e2424de0: CDI devices from CRI Config.CDIDevices: []"
May 15 12:40:04.377131 containerd[1570]: time="2025-05-15T12:40:04.377104042Z" level=info msg="CreateContainer within sandbox \"32010158f204e015342869c6491eb79c122d5bd2e59de4e487a1fa1e8f104b28\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"702febf99e532741c5b757158d6fdb43df081ab45ab5f85883f75fd5e2424de0\""
May 15 12:40:04.377697 containerd[1570]: time="2025-05-15T12:40:04.377645271Z" level=info msg="StartContainer for \"702febf99e532741c5b757158d6fdb43df081ab45ab5f85883f75fd5e2424de0\""
May 15 12:40:04.381325 containerd[1570]: time="2025-05-15T12:40:04.381278169Z" level=info msg="connecting to shim 702febf99e532741c5b757158d6fdb43df081ab45ab5f85883f75fd5e2424de0" address="unix:///run/containerd/s/4f65759d2648375a17e9a26ec95c30b4469a5cebdc02f6cdb68d35cdb8368c1e" protocol=ttrpc version=3
May 15 12:40:04.422421 systemd[1]: Started cri-containerd-702febf99e532741c5b757158d6fdb43df081ab45ab5f85883f75fd5e2424de0.scope - libcontainer container 702febf99e532741c5b757158d6fdb43df081ab45ab5f85883f75fd5e2424de0.
May 15 12:40:04.476115 containerd[1570]: time="2025-05-15T12:40:04.476077378Z" level=info msg="StartContainer for \"702febf99e532741c5b757158d6fdb43df081ab45ab5f85883f75fd5e2424de0\" returns successfully"
May 15 12:40:04.979346 kubelet[2830]: I0515 12:40:04.979253 2830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-797db67f8-mwhds" podStartSLOduration=3.58789252 podStartE2EDuration="5.979233354s" podCreationTimestamp="2025-05-15 12:39:59 +0000 UTC" firstStartedPulling="2025-05-15 12:40:01.969083253 +0000 UTC m=+16.206322127" lastFinishedPulling="2025-05-15 12:40:04.360424087 +0000 UTC m=+18.597662961" observedRunningTime="2025-05-15 12:40:04.97842327 +0000 UTC m=+19.215662154" watchObservedRunningTime="2025-05-15 12:40:04.979233354 +0000 UTC m=+19.216472248"
May 15 12:40:07.804459 kubelet[2830]: I0515 12:40:07.804306 2830 topology_manager.go:215] "Topology Admit Handler" podUID="ef32b572-5c1d-422f-80de-3b16fb8fb7b4" podNamespace="calico-system" podName="calico-typha-7bbf89b8c8-7dfdh"
May 15 12:40:07.812038 systemd[1]: Created slice kubepods-besteffort-podef32b572_5c1d_422f_80de_3b16fb8fb7b4.slice - libcontainer container kubepods-besteffort-podef32b572_5c1d_422f_80de_3b16fb8fb7b4.slice.
May 15 12:40:07.887564 kubelet[2830]: I0515 12:40:07.887523 2830 topology_manager.go:215] "Topology Admit Handler" podUID="3e0fb5f0-ddfc-4022-865c-cb2de4ca62e8" podNamespace="calico-system" podName="calico-node-rvtg5"
May 15 12:40:07.897087 systemd[1]: Created slice kubepods-besteffort-pod3e0fb5f0_ddfc_4022_865c_cb2de4ca62e8.slice - libcontainer container kubepods-besteffort-pod3e0fb5f0_ddfc_4022_865c_cb2de4ca62e8.slice.
May 15 12:40:07.982668 kubelet[2830]: I0515 12:40:07.982639 2830 topology_manager.go:215] "Topology Admit Handler" podUID="2f1afa6e-6224-473c-8d91-9f8e0eedd57e" podNamespace="calico-system" podName="csi-node-driver-n6z76"
May 15 12:40:07.983469 kubelet[2830]: E0515 12:40:07.983401 2830 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-n6z76" podUID="2f1afa6e-6224-473c-8d91-9f8e0eedd57e"
May 15 12:40:07.992692 kubelet[2830]: I0515 12:40:07.991902 2830 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/3e0fb5f0-ddfc-4022-865c-cb2de4ca62e8-node-certs\") pod \"calico-node-rvtg5\" (UID: \"3e0fb5f0-ddfc-4022-865c-cb2de4ca62e8\") " pod="calico-system/calico-node-rvtg5"
May 15 12:40:07.992692 kubelet[2830]: I0515 12:40:07.991932 2830 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/ef32b572-5c1d-422f-80de-3b16fb8fb7b4-typha-certs\") pod \"calico-typha-7bbf89b8c8-7dfdh\" (UID: \"ef32b572-5c1d-422f-80de-3b16fb8fb7b4\") " pod="calico-system/calico-typha-7bbf89b8c8-7dfdh"
May 15 12:40:07.992692 kubelet[2830]: I0515 12:40:07.991951 2830 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/3e0fb5f0-ddfc-4022-865c-cb2de4ca62e8-var-run-calico\") pod \"calico-node-rvtg5\" (UID: \"3e0fb5f0-ddfc-4022-865c-cb2de4ca62e8\") " pod="calico-system/calico-node-rvtg5"
May 15 12:40:07.992692 kubelet[2830]: I0515 12:40:07.991985 2830 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3e0fb5f0-ddfc-4022-865c-cb2de4ca62e8-xtables-lock\") pod \"calico-node-rvtg5\" (UID: \"3e0fb5f0-ddfc-4022-865c-cb2de4ca62e8\") " pod="calico-system/calico-node-rvtg5"
May 15 12:40:07.992692 kubelet[2830]: I0515 12:40:07.992008 2830 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cs7h6\" (UniqueName: \"kubernetes.io/projected/3e0fb5f0-ddfc-4022-865c-cb2de4ca62e8-kube-api-access-cs7h6\") pod \"calico-node-rvtg5\" (UID: \"3e0fb5f0-ddfc-4022-865c-cb2de4ca62e8\") " pod="calico-system/calico-node-rvtg5"
May 15 12:40:07.992898 kubelet[2830]: I0515 12:40:07.992025 2830 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/3e0fb5f0-ddfc-4022-865c-cb2de4ca62e8-cni-bin-dir\") pod \"calico-node-rvtg5\" (UID: \"3e0fb5f0-ddfc-4022-865c-cb2de4ca62e8\") " pod="calico-system/calico-node-rvtg5"
May 15 12:40:07.992898 kubelet[2830]: I0515 12:40:07.992038 2830 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/3e0fb5f0-ddfc-4022-865c-cb2de4ca62e8-flexvol-driver-host\") pod \"calico-node-rvtg5\" (UID: \"3e0fb5f0-ddfc-4022-865c-cb2de4ca62e8\") " pod="calico-system/calico-node-rvtg5"
May 15 12:40:07.992898 kubelet[2830]: I0515 12:40:07.992053 2830 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3e0fb5f0-ddfc-4022-865c-cb2de4ca62e8-lib-modules\") pod \"calico-node-rvtg5\" (UID: \"3e0fb5f0-ddfc-4022-865c-cb2de4ca62e8\") " pod="calico-system/calico-node-rvtg5"
May 15 12:40:07.992898 kubelet[2830]: I0515 12:40:07.992075 2830 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3e0fb5f0-ddfc-4022-865c-cb2de4ca62e8-tigera-ca-bundle\") pod \"calico-node-rvtg5\" (UID: \"3e0fb5f0-ddfc-4022-865c-cb2de4ca62e8\") " pod="calico-system/calico-node-rvtg5"
May 15 12:40:07.992898 kubelet[2830]: I0515 12:40:07.992090 2830 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wnfz6\" (UniqueName: \"kubernetes.io/projected/ef32b572-5c1d-422f-80de-3b16fb8fb7b4-kube-api-access-wnfz6\") pod \"calico-typha-7bbf89b8c8-7dfdh\" (UID: \"ef32b572-5c1d-422f-80de-3b16fb8fb7b4\") " pod="calico-system/calico-typha-7bbf89b8c8-7dfdh"
May 15 12:40:07.993047 kubelet[2830]: I0515 12:40:07.992103 2830 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/3e0fb5f0-ddfc-4022-865c-cb2de4ca62e8-cni-log-dir\") pod \"calico-node-rvtg5\" (UID: \"3e0fb5f0-ddfc-4022-865c-cb2de4ca62e8\") " pod="calico-system/calico-node-rvtg5"
May 15 12:40:07.993047 kubelet[2830]: I0515 12:40:07.992119 2830 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ef32b572-5c1d-422f-80de-3b16fb8fb7b4-tigera-ca-bundle\") pod \"calico-typha-7bbf89b8c8-7dfdh\" (UID: \"ef32b572-5c1d-422f-80de-3b16fb8fb7b4\") " pod="calico-system/calico-typha-7bbf89b8c8-7dfdh"
May 15 12:40:07.993047 kubelet[2830]: I0515 12:40:07.992132 2830 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/3e0fb5f0-ddfc-4022-865c-cb2de4ca62e8-policysync\") pod \"calico-node-rvtg5\" (UID: \"3e0fb5f0-ddfc-4022-865c-cb2de4ca62e8\") " pod="calico-system/calico-node-rvtg5"
May 15 12:40:07.993047 kubelet[2830]: I0515 12:40:07.992148 2830 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/3e0fb5f0-ddfc-4022-865c-cb2de4ca62e8-var-lib-calico\") pod \"calico-node-rvtg5\" (UID: \"3e0fb5f0-ddfc-4022-865c-cb2de4ca62e8\") " pod="calico-system/calico-node-rvtg5"
May 15 12:40:07.993047 kubelet[2830]: I0515 12:40:07.992163 2830 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/3e0fb5f0-ddfc-4022-865c-cb2de4ca62e8-cni-net-dir\") pod \"calico-node-rvtg5\" (UID: \"3e0fb5f0-ddfc-4022-865c-cb2de4ca62e8\") " pod="calico-system/calico-node-rvtg5"
May 15 12:40:08.093090 kubelet[2830]: I0515 12:40:08.092432 2830 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/2f1afa6e-6224-473c-8d91-9f8e0eedd57e-varrun\") pod \"csi-node-driver-n6z76\" (UID: \"2f1afa6e-6224-473c-8d91-9f8e0eedd57e\") " pod="calico-system/csi-node-driver-n6z76"
May 15 12:40:08.093090 kubelet[2830]: I0515 12:40:08.092517 2830 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/2f1afa6e-6224-473c-8d91-9f8e0eedd57e-registration-dir\") pod \"csi-node-driver-n6z76\" (UID: \"2f1afa6e-6224-473c-8d91-9f8e0eedd57e\") " pod="calico-system/csi-node-driver-n6z76"
May 15 12:40:08.093090 kubelet[2830]: I0515 12:40:08.092555 2830 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w28m4\" (UniqueName: \"kubernetes.io/projected/2f1afa6e-6224-473c-8d91-9f8e0eedd57e-kube-api-access-w28m4\") pod \"csi-node-driver-n6z76\" (UID: \"2f1afa6e-6224-473c-8d91-9f8e0eedd57e\") " pod="calico-system/csi-node-driver-n6z76"
May 15 12:40:08.093090 kubelet[2830]: I0515 12:40:08.092646 2830 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/2f1afa6e-6224-473c-8d91-9f8e0eedd57e-socket-dir\") pod \"csi-node-driver-n6z76\" (UID: \"2f1afa6e-6224-473c-8d91-9f8e0eedd57e\") " pod="calico-system/csi-node-driver-n6z76"
May 15 12:40:08.093090 kubelet[2830]: I0515 12:40:08.092713 2830 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2f1afa6e-6224-473c-8d91-9f8e0eedd57e-kubelet-dir\") pod \"csi-node-driver-n6z76\" (UID: \"2f1afa6e-6224-473c-8d91-9f8e0eedd57e\") " pod="calico-system/csi-node-driver-n6z76"
May 15 12:40:08.097413 kubelet[2830]: E0515 12:40:08.097372 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 15 12:40:08.097705 kubelet[2830]: W0515 12:40:08.097690 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 15 12:40:08.097814 kubelet[2830]: E0515 12:40:08.097801 2830 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 15 12:40:08.110325 kubelet[2830]: E0515 12:40:08.110273 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 15 12:40:08.110325 kubelet[2830]: W0515 12:40:08.110295 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 15 12:40:08.110325 kubelet[2830]: E0515 12:40:08.110313 2830 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 15 12:40:08.113567 kubelet[2830]: E0515 12:40:08.113450 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 15 12:40:08.113567 kubelet[2830]: W0515 12:40:08.113467 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 15 12:40:08.113567 kubelet[2830]: E0515 12:40:08.113481 2830 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 15 12:40:08.127186 kubelet[2830]: E0515 12:40:08.127163 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 15 12:40:08.127186 kubelet[2830]: W0515 12:40:08.127181 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 15 12:40:08.127929 kubelet[2830]: E0515 12:40:08.127214 2830 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 15 12:40:08.130084 kubelet[2830]: E0515 12:40:08.130030 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 15 12:40:08.130084 kubelet[2830]: W0515 12:40:08.130044 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 15 12:40:08.130084 kubelet[2830]: E0515 12:40:08.130071 2830 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 15 12:40:08.194806 kubelet[2830]: E0515 12:40:08.194746 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 15 12:40:08.194991 kubelet[2830]: W0515 12:40:08.194953 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 15 12:40:08.195117 kubelet[2830]: E0515 12:40:08.195098 2830 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 15 12:40:08.195584 kubelet[2830]: E0515 12:40:08.195531 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 15 12:40:08.195718 kubelet[2830]: W0515 12:40:08.195698 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 15 12:40:08.195901 kubelet[2830]: E0515 12:40:08.195856 2830 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 15 12:40:08.196280 kubelet[2830]: E0515 12:40:08.196262 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 15 12:40:08.196280 kubelet[2830]: W0515 12:40:08.196277 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 15 12:40:08.196482 kubelet[2830]: E0515 12:40:08.196297 2830 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 15 12:40:08.196795 kubelet[2830]: E0515 12:40:08.196681 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 15 12:40:08.196795 kubelet[2830]: W0515 12:40:08.196713 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 15 12:40:08.196795 kubelet[2830]: E0515 12:40:08.196735 2830 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 15 12:40:08.197348 kubelet[2830]: E0515 12:40:08.197303 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 15 12:40:08.197522 kubelet[2830]: W0515 12:40:08.197429 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 15 12:40:08.197522 kubelet[2830]: E0515 12:40:08.197452 2830 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 15 12:40:08.197866 kubelet[2830]: E0515 12:40:08.197852 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 15 12:40:08.197866 kubelet[2830]: W0515 12:40:08.197863 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 15 12:40:08.198149 kubelet[2830]: E0515 12:40:08.197932 2830 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 15 12:40:08.198371 kubelet[2830]: E0515 12:40:08.198358 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 15 12:40:08.198371 kubelet[2830]: W0515 12:40:08.198369 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 15 12:40:08.198574 kubelet[2830]: E0515 12:40:08.198432 2830 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 15 12:40:08.198779 kubelet[2830]: E0515 12:40:08.198767 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 15 12:40:08.198779 kubelet[2830]: W0515 12:40:08.198777 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 15 12:40:08.199005 kubelet[2830]: E0515 12:40:08.198835 2830 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 15 12:40:08.199046 kubelet[2830]: E0515 12:40:08.199013 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 15 12:40:08.199046 kubelet[2830]: W0515 12:40:08.199022 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 15 12:40:08.199306 kubelet[2830]: E0515 12:40:08.199293 2830 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 15 12:40:08.199625 kubelet[2830]: E0515 12:40:08.199607 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 15 12:40:08.199625 kubelet[2830]: W0515 12:40:08.199622 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 15 12:40:08.199849 kubelet[2830]: E0515 12:40:08.199637 2830 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 15 12:40:08.199878 kubelet[2830]: E0515 12:40:08.199853 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 15 12:40:08.199878 kubelet[2830]: W0515 12:40:08.199860 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 15 12:40:08.199878 kubelet[2830]: E0515 12:40:08.199869 2830 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 15 12:40:08.200309 kubelet[2830]: E0515 12:40:08.200295 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 15 12:40:08.200309 kubelet[2830]: W0515 12:40:08.200307 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 15 12:40:08.200491 kubelet[2830]: E0515 12:40:08.200321 2830 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 15 12:40:08.200726 kubelet[2830]: E0515 12:40:08.200715 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 15 12:40:08.200868 kubelet[2830]: W0515 12:40:08.200787 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 15 12:40:08.200868 kubelet[2830]: E0515 12:40:08.200808 2830 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 15 12:40:08.201109 kubelet[2830]: E0515 12:40:08.201096 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 15 12:40:08.201397 kubelet[2830]: W0515 12:40:08.201107 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 15 12:40:08.201554 kubelet[2830]: E0515 12:40:08.201407 2830 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 15 12:40:08.202545 kubelet[2830]: E0515 12:40:08.202520 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 15 12:40:08.202545 kubelet[2830]: W0515 12:40:08.202531 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 15 12:40:08.202908 kubelet[2830]: E0515 12:40:08.202888 2830 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16"
May 15 12:40:08.203112 kubelet[2830]: E0515 12:40:08.203037 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 15 12:40:08.203112 kubelet[2830]: W0515 12:40:08.203046 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 15 12:40:08.203651 kubelet[2830]: E0515 12:40:08.203631 2830 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 15 12:40:08.204232 containerd[1570]: time="2025-05-15T12:40:08.204200594Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-rvtg5,Uid:3e0fb5f0-ddfc-4022-865c-cb2de4ca62e8,Namespace:calico-system,Attempt:0,}"
May 15 12:40:08.205063 kubelet[2830]: E0515 12:40:08.204747 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 15 12:40:08.205063 kubelet[2830]: W0515 12:40:08.204761 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 15 12:40:08.205063 kubelet[2830]: E0515 12:40:08.204796 2830 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 15 12:40:08.205063 kubelet[2830]: E0515 12:40:08.205006 2830 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 15 12:40:08.205828 kubelet[2830]: E0515 12:40:08.205455 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 15 12:40:08.205828 kubelet[2830]: W0515 12:40:08.205464 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 15 12:40:08.205828 kubelet[2830]: E0515 12:40:08.205496 2830 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 15 12:40:08.206257 kubelet[2830]: E0515 12:40:08.205851 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 15 12:40:08.206257 kubelet[2830]: W0515 12:40:08.206000 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 15 12:40:08.206257 kubelet[2830]: E0515 12:40:08.206021 2830 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 15 12:40:08.206953 kubelet[2830]: E0515 12:40:08.206927 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 15 12:40:08.206953 kubelet[2830]: W0515 12:40:08.206940 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 15 12:40:08.207728 kubelet[2830]: E0515 12:40:08.207473 2830 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 15 12:40:08.207728 kubelet[2830]: E0515 12:40:08.207487 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 15 12:40:08.207728 kubelet[2830]: W0515 12:40:08.207646 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 15 12:40:08.207937 kubelet[2830]: E0515 12:40:08.207896 2830 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 15 12:40:08.208666 kubelet[2830]: E0515 12:40:08.208581 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 15 12:40:08.208666 kubelet[2830]: W0515 12:40:08.208597 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 15 12:40:08.209083 kubelet[2830]: E0515 12:40:08.208796 2830 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 15 12:40:08.209422 kubelet[2830]: E0515 12:40:08.209371 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 15 12:40:08.209701 kubelet[2830]: W0515 12:40:08.209594 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 15 12:40:08.209701 kubelet[2830]: E0515 12:40:08.209642 2830 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 15 12:40:08.210454 kubelet[2830]: E0515 12:40:08.210433 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 15 12:40:08.211018 kubelet[2830]: W0515 12:40:08.210605 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 15 12:40:08.211018 kubelet[2830]: E0515 12:40:08.210632 2830 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 15 12:40:08.212487 kubelet[2830]: E0515 12:40:08.212175 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 15 12:40:08.212487 kubelet[2830]: W0515 12:40:08.212184 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 15 12:40:08.212487 kubelet[2830]: E0515 12:40:08.212194 2830 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 15 12:40:08.236601 kubelet[2830]: E0515 12:40:08.236529 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 15 12:40:08.236601 kubelet[2830]: W0515 12:40:08.236549 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 15 12:40:08.236601 kubelet[2830]: E0515 12:40:08.236570 2830 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 15 12:40:08.258871 containerd[1570]: time="2025-05-15T12:40:08.258766841Z" level=info msg="connecting to shim 8abd2a4186fac809bfb50819f8f51048f513f098c45622e059d9de857ea226cf" address="unix:///run/containerd/s/0881dc722006ea1ef3e033d1c794a14bfd83cf624e57dc7ca492a316d1c8a198" namespace=k8s.io protocol=ttrpc version=3
May 15 12:40:08.334122 systemd[1]: Started cri-containerd-8abd2a4186fac809bfb50819f8f51048f513f098c45622e059d9de857ea226cf.scope - libcontainer container 8abd2a4186fac809bfb50819f8f51048f513f098c45622e059d9de857ea226cf.
May 15 12:40:08.383866 containerd[1570]: time="2025-05-15T12:40:08.382476450Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-rvtg5,Uid:3e0fb5f0-ddfc-4022-865c-cb2de4ca62e8,Namespace:calico-system,Attempt:0,} returns sandbox id \"8abd2a4186fac809bfb50819f8f51048f513f098c45622e059d9de857ea226cf\""
May 15 12:40:08.385231 kubelet[2830]: E0515 12:40:08.385167 2830 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16"
May 15 12:40:08.386805 containerd[1570]: time="2025-05-15T12:40:08.386750852Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\""
May 15 12:40:08.422770 kubelet[2830]: E0515 12:40:08.422590 2830 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16"
May 15 12:40:08.425804 containerd[1570]: time="2025-05-15T12:40:08.425757413Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7bbf89b8c8-7dfdh,Uid:ef32b572-5c1d-422f-80de-3b16fb8fb7b4,Namespace:calico-system,Attempt:0,}"
May 15 12:40:08.463554 containerd[1570]: time="2025-05-15T12:40:08.460819830Z" level=info msg="connecting to shim f992e7c9d9f1cfb2dca13bf963e18c22e2e42d96c5fca9fad70dbe453ded623a" address="unix:///run/containerd/s/9a90a05de30d955e5bf7d5ef0c3c1e529ab834d81f170aa829b93da4baf429d5" namespace=k8s.io protocol=ttrpc version=3
May 15 12:40:08.525171 systemd[1]: Started cri-containerd-f992e7c9d9f1cfb2dca13bf963e18c22e2e42d96c5fca9fad70dbe453ded623a.scope - libcontainer container f992e7c9d9f1cfb2dca13bf963e18c22e2e42d96c5fca9fad70dbe453ded623a.
May 15 12:40:08.605691 containerd[1570]: time="2025-05-15T12:40:08.605629093Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7bbf89b8c8-7dfdh,Uid:ef32b572-5c1d-422f-80de-3b16fb8fb7b4,Namespace:calico-system,Attempt:0,} returns sandbox id \"f992e7c9d9f1cfb2dca13bf963e18c22e2e42d96c5fca9fad70dbe453ded623a\""
May 15 12:40:08.607058 kubelet[2830]: E0515 12:40:08.606790 2830 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16"
May 15 12:40:09.885006 kubelet[2830]: E0515 12:40:09.884710 2830 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-n6z76" podUID="2f1afa6e-6224-473c-8d91-9f8e0eedd57e"
May 15 12:40:11.887064 kubelet[2830]: E0515 12:40:11.886048 2830 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-n6z76" podUID="2f1afa6e-6224-473c-8d91-9f8e0eedd57e"
May 15 12:40:13.887956 kubelet[2830]: E0515 12:40:13.887797 2830 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-n6z76" podUID="2f1afa6e-6224-473c-8d91-9f8e0eedd57e"
May 15 12:40:15.887618 kubelet[2830]: E0515 12:40:15.887493 2830 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-n6z76" podUID="2f1afa6e-6224-473c-8d91-9f8e0eedd57e"
May 15 12:40:16.229555 containerd[1570]: time="2025-05-15T12:40:16.229410475Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 12:40:16.230719 containerd[1570]: time="2025-05-15T12:40:16.230613482Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3: active requests=0, bytes read=5366937"
May 15 12:40:16.231271 containerd[1570]: time="2025-05-15T12:40:16.231242997Z" level=info msg="ImageCreate event name:\"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 12:40:16.232827 containerd[1570]: time="2025-05-15T12:40:16.232799171Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 12:40:16.233370 containerd[1570]: time="2025-05-15T12:40:16.233345786Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" with image id \"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\", size \"6859519\" in 7.846567195s"
May 15 12:40:16.233451 containerd[1570]: time="2025-05-15T12:40:16.233436538Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" returns image reference \"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\""
May 15 12:40:16.235334 containerd[1570]: time="2025-05-15T12:40:16.235319023Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\""
May 15 12:40:16.236186 containerd[1570]: time="2025-05-15T12:40:16.236167543Z" level=info msg="CreateContainer within sandbox \"8abd2a4186fac809bfb50819f8f51048f513f098c45622e059d9de857ea226cf\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
May 15 12:40:16.252015 containerd[1570]: time="2025-05-15T12:40:16.251928614Z" level=info msg="Container de6661dc1e0b2d28a9136bc593f19588e93422143bc78cac0cbf7ab7a54aa927: CDI devices from CRI Config.CDIDevices: []"
May 15 12:40:16.267299 containerd[1570]: time="2025-05-15T12:40:16.267247956Z" level=info msg="CreateContainer within sandbox \"8abd2a4186fac809bfb50819f8f51048f513f098c45622e059d9de857ea226cf\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"de6661dc1e0b2d28a9136bc593f19588e93422143bc78cac0cbf7ab7a54aa927\""
May 15 12:40:16.267883 containerd[1570]: time="2025-05-15T12:40:16.267846374Z" level=info msg="StartContainer for \"de6661dc1e0b2d28a9136bc593f19588e93422143bc78cac0cbf7ab7a54aa927\""
May 15 12:40:16.269477 containerd[1570]: time="2025-05-15T12:40:16.269448469Z" level=info msg="connecting to shim de6661dc1e0b2d28a9136bc593f19588e93422143bc78cac0cbf7ab7a54aa927" address="unix:///run/containerd/s/0881dc722006ea1ef3e033d1c794a14bfd83cf624e57dc7ca492a316d1c8a198" protocol=ttrpc version=3
May 15 12:40:16.327294 systemd[1]: Started cri-containerd-de6661dc1e0b2d28a9136bc593f19588e93422143bc78cac0cbf7ab7a54aa927.scope - libcontainer container de6661dc1e0b2d28a9136bc593f19588e93422143bc78cac0cbf7ab7a54aa927.
May 15 12:40:16.416156 containerd[1570]: time="2025-05-15T12:40:16.416076018Z" level=info msg="StartContainer for \"de6661dc1e0b2d28a9136bc593f19588e93422143bc78cac0cbf7ab7a54aa927\" returns successfully"
May 15 12:40:16.438900 systemd[1]: cri-containerd-de6661dc1e0b2d28a9136bc593f19588e93422143bc78cac0cbf7ab7a54aa927.scope: Deactivated successfully.
May 15 12:40:16.442834 containerd[1570]: time="2025-05-15T12:40:16.442724327Z" level=info msg="received exit event container_id:\"de6661dc1e0b2d28a9136bc593f19588e93422143bc78cac0cbf7ab7a54aa927\" id:\"de6661dc1e0b2d28a9136bc593f19588e93422143bc78cac0cbf7ab7a54aa927\" pid:3367 exited_at:{seconds:1747312816 nanos:442091751}"
May 15 12:40:16.442834 containerd[1570]: time="2025-05-15T12:40:16.442797195Z" level=info msg="TaskExit event in podsandbox handler container_id:\"de6661dc1e0b2d28a9136bc593f19588e93422143bc78cac0cbf7ab7a54aa927\" id:\"de6661dc1e0b2d28a9136bc593f19588e93422143bc78cac0cbf7ab7a54aa927\" pid:3367 exited_at:{seconds:1747312816 nanos:442091751}"
May 15 12:40:16.465892 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-de6661dc1e0b2d28a9136bc593f19588e93422143bc78cac0cbf7ab7a54aa927-rootfs.mount: Deactivated successfully.
May 15 12:40:17.010091 kubelet[2830]: E0515 12:40:17.010052 2830 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16"
May 15 12:40:17.888017 kubelet[2830]: E0515 12:40:17.885605 2830 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-n6z76" podUID="2f1afa6e-6224-473c-8d91-9f8e0eedd57e"
May 15 12:40:19.886022 kubelet[2830]: E0515 12:40:19.885083 2830 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-n6z76" podUID="2f1afa6e-6224-473c-8d91-9f8e0eedd57e"
May 15 12:40:21.685446 containerd[1570]: time="2025-05-15T12:40:21.685290162Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 12:40:21.686844 containerd[1570]: time="2025-05-15T12:40:21.686227292Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.3: active requests=0, bytes read=30426870"
May 15 12:40:21.686844 containerd[1570]: time="2025-05-15T12:40:21.686628057Z" level=info msg="ImageCreate event name:\"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 12:40:21.688112 containerd[1570]: time="2025-05-15T12:40:21.688085999Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 12:40:21.688775 containerd[1570]: time="2025-05-15T12:40:21.688709892Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.3\" with image id \"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\", size \"31919484\" in 5.453290505s"
May 15 12:40:21.688775 containerd[1570]: time="2025-05-15T12:40:21.688745900Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\" returns image reference \"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\""
May 15 12:40:21.691101 containerd[1570]: time="2025-05-15T12:40:21.690546985Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\""
May 15 12:40:21.864695 containerd[1570]: time="2025-05-15T12:40:21.864631816Z" level=info msg="CreateContainer within sandbox \"f992e7c9d9f1cfb2dca13bf963e18c22e2e42d96c5fca9fad70dbe453ded623a\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
May 15 12:40:21.874993 containerd[1570]: time="2025-05-15T12:40:21.871018721Z" level=info msg="Container ce2af80fcf814145a80b6f6ed5574fd6c62fc43152d3c9fcb14ee42e66d6cd1a: CDI devices from CRI Config.CDIDevices: []"
May 15 12:40:21.879990 containerd[1570]: time="2025-05-15T12:40:21.879947509Z" level=info msg="CreateContainer within sandbox \"f992e7c9d9f1cfb2dca13bf963e18c22e2e42d96c5fca9fad70dbe453ded623a\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"ce2af80fcf814145a80b6f6ed5574fd6c62fc43152d3c9fcb14ee42e66d6cd1a\""
May 15 12:40:21.880579 containerd[1570]: time="2025-05-15T12:40:21.880551748Z" level=info msg="StartContainer for \"ce2af80fcf814145a80b6f6ed5574fd6c62fc43152d3c9fcb14ee42e66d6cd1a\""
May 15 12:40:21.881958 containerd[1570]: time="2025-05-15T12:40:21.881898896Z" level=info msg="connecting to shim ce2af80fcf814145a80b6f6ed5574fd6c62fc43152d3c9fcb14ee42e66d6cd1a" address="unix:///run/containerd/s/9a90a05de30d955e5bf7d5ef0c3c1e529ab834d81f170aa829b93da4baf429d5" protocol=ttrpc version=3
May 15 12:40:21.887986 kubelet[2830]: E0515 12:40:21.887876 2830 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-n6z76" podUID="2f1afa6e-6224-473c-8d91-9f8e0eedd57e"
May 15 12:40:21.929190 systemd[1]: Started cri-containerd-ce2af80fcf814145a80b6f6ed5574fd6c62fc43152d3c9fcb14ee42e66d6cd1a.scope - libcontainer container ce2af80fcf814145a80b6f6ed5574fd6c62fc43152d3c9fcb14ee42e66d6cd1a.
May 15 12:40:22.004134 containerd[1570]: time="2025-05-15T12:40:22.003946121Z" level=info msg="StartContainer for \"ce2af80fcf814145a80b6f6ed5574fd6c62fc43152d3c9fcb14ee42e66d6cd1a\" returns successfully" May 15 12:40:22.028515 kubelet[2830]: E0515 12:40:22.028365 2830 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" May 15 12:40:22.050806 kubelet[2830]: I0515 12:40:22.049824 2830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7bbf89b8c8-7dfdh" podStartSLOduration=1.969878706 podStartE2EDuration="15.04980187s" podCreationTimestamp="2025-05-15 12:40:07 +0000 UTC" firstStartedPulling="2025-05-15 12:40:08.610180686 +0000 UTC m=+22.847419560" lastFinishedPulling="2025-05-15 12:40:21.69010385 +0000 UTC m=+35.927342724" observedRunningTime="2025-05-15 12:40:22.046986754 +0000 UTC m=+36.284225628" watchObservedRunningTime="2025-05-15 12:40:22.04980187 +0000 UTC m=+36.287040744" May 15 12:40:23.031987 kubelet[2830]: E0515 12:40:23.031825 2830 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" May 15 12:40:23.886017 kubelet[2830]: E0515 12:40:23.884997 2830 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-n6z76" podUID="2f1afa6e-6224-473c-8d91-9f8e0eedd57e" May 15 12:40:24.032961 kubelet[2830]: E0515 12:40:24.032917 2830 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" May 15 12:40:25.887316 kubelet[2830]: E0515 12:40:25.887040 2830 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-n6z76" podUID="2f1afa6e-6224-473c-8d91-9f8e0eedd57e" May 15 12:40:27.892749 kubelet[2830]: E0515 12:40:27.892343 2830 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-n6z76" podUID="2f1afa6e-6224-473c-8d91-9f8e0eedd57e" May 15 12:40:29.403471 containerd[1570]: time="2025-05-15T12:40:29.403417325Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:40:29.404494 containerd[1570]: time="2025-05-15T12:40:29.404380087Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.3: active requests=0, bytes read=97793683" May 15 12:40:29.404958 containerd[1570]: time="2025-05-15T12:40:29.404931435Z" level=info msg="ImageCreate event name:\"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:40:29.406748 containerd[1570]: time="2025-05-15T12:40:29.406715014Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:40:29.407298 containerd[1570]: time="2025-05-15T12:40:29.407265872Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.3\" with image id \"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\", size \"99286305\" in 7.715636246s" May 15 12:40:29.407400 containerd[1570]: time="2025-05-15T12:40:29.407384874Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\" returns image reference \"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\"" May 15 12:40:29.410510 containerd[1570]: time="2025-05-15T12:40:29.410490448Z" level=info msg="CreateContainer within sandbox \"8abd2a4186fac809bfb50819f8f51048f513f098c45622e059d9de857ea226cf\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" May 15 12:40:29.418424 containerd[1570]: time="2025-05-15T12:40:29.418202045Z" level=info msg="Container 023dbd80477d1b39889f2ae55c701472b18d3fee0fd23fbacb4f0c2e288a2a39: CDI devices from CRI Config.CDIDevices: []" May 15 12:40:29.427002 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1967714346.mount: Deactivated successfully. May 15 12:40:29.438532 containerd[1570]: time="2025-05-15T12:40:29.438484827Z" level=info msg="CreateContainer within sandbox \"8abd2a4186fac809bfb50819f8f51048f513f098c45622e059d9de857ea226cf\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"023dbd80477d1b39889f2ae55c701472b18d3fee0fd23fbacb4f0c2e288a2a39\"" May 15 12:40:29.439124 containerd[1570]: time="2025-05-15T12:40:29.439094235Z" level=info msg="StartContainer for \"023dbd80477d1b39889f2ae55c701472b18d3fee0fd23fbacb4f0c2e288a2a39\"" May 15 12:40:29.440737 containerd[1570]: time="2025-05-15T12:40:29.440668377Z" level=info msg="connecting to shim 023dbd80477d1b39889f2ae55c701472b18d3fee0fd23fbacb4f0c2e288a2a39" address="unix:///run/containerd/s/0881dc722006ea1ef3e033d1c794a14bfd83cf624e57dc7ca492a316d1c8a198" protocol=ttrpc version=3 May 15 12:40:29.481238 systemd[1]: Started cri-containerd-023dbd80477d1b39889f2ae55c701472b18d3fee0fd23fbacb4f0c2e288a2a39.scope - libcontainer container 023dbd80477d1b39889f2ae55c701472b18d3fee0fd23fbacb4f0c2e288a2a39. 
May 15 12:40:29.541833 containerd[1570]: time="2025-05-15T12:40:29.541735923Z" level=info msg="StartContainer for \"023dbd80477d1b39889f2ae55c701472b18d3fee0fd23fbacb4f0c2e288a2a39\" returns successfully" May 15 12:40:29.888701 kubelet[2830]: E0515 12:40:29.888447 2830 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-n6z76" podUID="2f1afa6e-6224-473c-8d91-9f8e0eedd57e" May 15 12:40:30.103248 kubelet[2830]: E0515 12:40:30.102512 2830 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" May 15 12:40:31.095404 kubelet[2830]: E0515 12:40:31.095356 2830 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" May 15 12:40:31.147994 containerd[1570]: time="2025-05-15T12:40:31.147869636Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 15 12:40:31.151897 systemd[1]: cri-containerd-023dbd80477d1b39889f2ae55c701472b18d3fee0fd23fbacb4f0c2e288a2a39.scope: Deactivated successfully. May 15 12:40:31.152328 systemd[1]: cri-containerd-023dbd80477d1b39889f2ae55c701472b18d3fee0fd23fbacb4f0c2e288a2a39.scope: Consumed 1.658s CPU time, 175.9M memory peak, 154M written to disk. May 15 12:40:31.153747 containerd[1570]: time="2025-05-15T12:40:31.153457168Z" level=info msg="TaskExit event in podsandbox handler container_id:\"023dbd80477d1b39889f2ae55c701472b18d3fee0fd23fbacb4f0c2e288a2a39\" id:\"023dbd80477d1b39889f2ae55c701472b18d3fee0fd23fbacb4f0c2e288a2a39\" pid:3466 exited_at:{seconds:1747312831 nanos:152962523}" May 15 12:40:31.153747 containerd[1570]: time="2025-05-15T12:40:31.153562176Z" level=info msg="received exit event container_id:\"023dbd80477d1b39889f2ae55c701472b18d3fee0fd23fbacb4f0c2e288a2a39\" id:\"023dbd80477d1b39889f2ae55c701472b18d3fee0fd23fbacb4f0c2e288a2a39\" pid:3466 exited_at:{seconds:1747312831 nanos:152962523}" May 15 12:40:31.180363 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-023dbd80477d1b39889f2ae55c701472b18d3fee0fd23fbacb4f0c2e288a2a39-rootfs.mount: Deactivated successfully. 
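The exited_at fields in these TaskExit events are protobuf timestamps, epoch seconds plus nanoseconds, and they decode to the same instants as the entries' own wall-clock stamps. A short Go sketch confirms this for the install-cni exit above:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// exited_at from the TaskExit event above, epoch seconds + nanos.
	exitedAt := time.Unix(1747312831, 152962523).UTC()

	// Prints 2025-05-15T12:40:31.152962523Z — the same instant the
	// surrounding containerd entries are stamped with.
	fmt.Println(exitedAt.Format(time.RFC3339Nano))
}
```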
May 15 12:40:31.196464 kubelet[2830]: I0515 12:40:31.196172 2830 kubelet_node_status.go:497] "Fast updating node status as it just became ready" May 15 12:40:31.267023 kubelet[2830]: I0515 12:40:31.266117 2830 topology_manager.go:215] "Topology Admit Handler" podUID="d81f736f-2cfe-4dd7-8bae-39e5d7b0171c" podNamespace="kube-system" podName="coredns-7db6d8ff4d-vnsrk" May 15 12:40:31.269219 kubelet[2830]: I0515 12:40:31.269174 2830 topology_manager.go:215] "Topology Admit Handler" podUID="f6c810d1-56b7-4269-a991-f69aed60bb27" podNamespace="kube-system" podName="coredns-7db6d8ff4d-zltl2" May 15 12:40:31.271000 kubelet[2830]: I0515 12:40:31.269956 2830 topology_manager.go:215] "Topology Admit Handler" podUID="28308a9a-6a5a-4c07-b05d-23fd8cc4a3e9" podNamespace="calico-apiserver" podName="calico-apiserver-78b5784dc8-8lbpp" May 15 12:40:31.279122 systemd[1]: Created slice kubepods-burstable-podd81f736f_2cfe_4dd7_8bae_39e5d7b0171c.slice - libcontainer container kubepods-burstable-podd81f736f_2cfe_4dd7_8bae_39e5d7b0171c.slice. May 15 12:40:31.282034 kubelet[2830]: I0515 12:40:31.281123 2830 topology_manager.go:215] "Topology Admit Handler" podUID="f51c33cf-e651-4159-ba52-866ced1779f7" podNamespace="calico-apiserver" podName="calico-apiserver-78b5784dc8-mxm9v" May 15 12:40:31.287583 kubelet[2830]: I0515 12:40:31.287418 2830 topology_manager.go:215] "Topology Admit Handler" podUID="086a9281-b1a7-45e0-92ca-5dca97c27bd4" podNamespace="calico-apiserver" podName="calico-apiserver-794557d677-skbcb" May 15 12:40:31.290780 kubelet[2830]: I0515 12:40:31.290758 2830 topology_manager.go:215] "Topology Admit Handler" podUID="b4d5a2a6-1051-40f7-84e4-ce0a66d4b74d" podNamespace="calico-system" podName="calico-kube-controllers-b4bb544b7-zbnfw" May 15 12:40:31.291438 systemd[1]: Created slice kubepods-burstable-podf6c810d1_56b7_4269_a991_f69aed60bb27.slice - libcontainer container kubepods-burstable-podf6c810d1_56b7_4269_a991_f69aed60bb27.slice. May 15 12:40:31.306425 systemd[1]: Created slice kubepods-besteffort-pod28308a9a_6a5a_4c07_b05d_23fd8cc4a3e9.slice - libcontainer container kubepods-besteffort-pod28308a9a_6a5a_4c07_b05d_23fd8cc4a3e9.slice. May 15 12:40:31.335417 systemd[1]: Created slice kubepods-besteffort-podf51c33cf_e651_4159_ba52_866ced1779f7.slice - libcontainer container kubepods-besteffort-podf51c33cf_e651_4159_ba52_866ced1779f7.slice. May 15 12:40:31.350366 systemd[1]: Created slice kubepods-besteffort-podb4d5a2a6_1051_40f7_84e4_ce0a66d4b74d.slice - libcontainer container kubepods-besteffort-podb4d5a2a6_1051_40f7_84e4_ce0a66d4b74d.slice. May 15 12:40:31.362382 systemd[1]: Created slice kubepods-besteffort-pod086a9281_b1a7_45e0_92ca_5dca97c27bd4.slice - libcontainer container kubepods-besteffort-pod086a9281_b1a7_45e0_92ca_5dca97c27bd4.slice. 
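The "Created slice" entries above show the kubelet's systemd cgroup driver at work: each pod gets a transient slice under its QoS class (burstable for the coredns pods, besteffort for the rest), with the dashes in the pod UID escaped to underscores. A small Go sketch of the naming convention as it is observable in these entries (a reconstruction, not kubelet's actual code):

```go
package main

import (
	"fmt"
	"strings"
)

// sliceName reproduces the pattern visible in the "Created slice" entries:
// a QoS-class prefix plus the pod UID with dashes escaped to underscores.
func sliceName(qos, podUID string) string {
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, strings.ReplaceAll(podUID, "-", "_"))
}

func main() {
	// coredns-7db6d8ff4d-vnsrk from the log above:
	fmt.Println(sliceName("burstable", "d81f736f-2cfe-4dd7-8bae-39e5d7b0171c"))
	// -> kubepods-burstable-podd81f736f_2cfe_4dd7_8bae_39e5d7b0171c.slice
}
```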
May 15 12:40:31.375382 kubelet[2830]: I0515 12:40:31.375323 2830 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f6c810d1-56b7-4269-a991-f69aed60bb27-config-volume\") pod \"coredns-7db6d8ff4d-zltl2\" (UID: \"f6c810d1-56b7-4269-a991-f69aed60bb27\") " pod="kube-system/coredns-7db6d8ff4d-zltl2" May 15 12:40:31.375382 kubelet[2830]: I0515 12:40:31.375375 2830 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxjds\" (UniqueName: \"kubernetes.io/projected/28308a9a-6a5a-4c07-b05d-23fd8cc4a3e9-kube-api-access-mxjds\") pod \"calico-apiserver-78b5784dc8-8lbpp\" (UID: \"28308a9a-6a5a-4c07-b05d-23fd8cc4a3e9\") " pod="calico-apiserver/calico-apiserver-78b5784dc8-8lbpp" May 15 12:40:31.375661 kubelet[2830]: I0515 12:40:31.375406 2830 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b4d5a2a6-1051-40f7-84e4-ce0a66d4b74d-tigera-ca-bundle\") pod \"calico-kube-controllers-b4bb544b7-zbnfw\" (UID: \"b4d5a2a6-1051-40f7-84e4-ce0a66d4b74d\") " pod="calico-system/calico-kube-controllers-b4bb544b7-zbnfw" May 15 12:40:31.375661 kubelet[2830]: I0515 12:40:31.375441 2830 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xjt4f\" (UniqueName: \"kubernetes.io/projected/f6c810d1-56b7-4269-a991-f69aed60bb27-kube-api-access-xjt4f\") pod \"coredns-7db6d8ff4d-zltl2\" (UID: \"f6c810d1-56b7-4269-a991-f69aed60bb27\") " pod="kube-system/coredns-7db6d8ff4d-zltl2" May 15 12:40:31.375661 kubelet[2830]: I0515 12:40:31.375476 2830 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/f51c33cf-e651-4159-ba52-866ced1779f7-calico-apiserver-certs\") pod \"calico-apiserver-78b5784dc8-mxm9v\" (UID: \"f51c33cf-e651-4159-ba52-866ced1779f7\") " pod="calico-apiserver/calico-apiserver-78b5784dc8-mxm9v" May 15 12:40:31.375661 kubelet[2830]: I0515 12:40:31.375502 2830 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bqfkv\" (UniqueName: \"kubernetes.io/projected/d81f736f-2cfe-4dd7-8bae-39e5d7b0171c-kube-api-access-bqfkv\") pod \"coredns-7db6d8ff4d-vnsrk\" (UID: \"d81f736f-2cfe-4dd7-8bae-39e5d7b0171c\") " pod="kube-system/coredns-7db6d8ff4d-vnsrk" May 15 12:40:31.375661 kubelet[2830]: I0515 12:40:31.375529 2830 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5ll4\" (UniqueName: \"kubernetes.io/projected/b4d5a2a6-1051-40f7-84e4-ce0a66d4b74d-kube-api-access-j5ll4\") pod \"calico-kube-controllers-b4bb544b7-zbnfw\" (UID: \"b4d5a2a6-1051-40f7-84e4-ce0a66d4b74d\") " pod="calico-system/calico-kube-controllers-b4bb544b7-zbnfw" May 15 12:40:31.375806 kubelet[2830]: I0515 12:40:31.375559 2830 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/086a9281-b1a7-45e0-92ca-5dca97c27bd4-calico-apiserver-certs\") pod \"calico-apiserver-794557d677-skbcb\" (UID: \"086a9281-b1a7-45e0-92ca-5dca97c27bd4\") " pod="calico-apiserver/calico-apiserver-794557d677-skbcb" May 15 12:40:31.375806 kubelet[2830]: I0515 12:40:31.375579 2830 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-xwb77\" (UniqueName: \"kubernetes.io/projected/086a9281-b1a7-45e0-92ca-5dca97c27bd4-kube-api-access-xwb77\") pod \"calico-apiserver-794557d677-skbcb\" (UID: \"086a9281-b1a7-45e0-92ca-5dca97c27bd4\") " pod="calico-apiserver/calico-apiserver-794557d677-skbcb" May 15 12:40:31.375806 kubelet[2830]: I0515 12:40:31.375602 2830 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d81f736f-2cfe-4dd7-8bae-39e5d7b0171c-config-volume\") pod \"coredns-7db6d8ff4d-vnsrk\" (UID: \"d81f736f-2cfe-4dd7-8bae-39e5d7b0171c\") " pod="kube-system/coredns-7db6d8ff4d-vnsrk" May 15 12:40:31.375806 kubelet[2830]: I0515 12:40:31.375622 2830 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/28308a9a-6a5a-4c07-b05d-23fd8cc4a3e9-calico-apiserver-certs\") pod \"calico-apiserver-78b5784dc8-8lbpp\" (UID: \"28308a9a-6a5a-4c07-b05d-23fd8cc4a3e9\") " pod="calico-apiserver/calico-apiserver-78b5784dc8-8lbpp" May 15 12:40:31.375806 kubelet[2830]: I0515 12:40:31.375640 2830 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwnjz\" (UniqueName: \"kubernetes.io/projected/f51c33cf-e651-4159-ba52-866ced1779f7-kube-api-access-kwnjz\") pod \"calico-apiserver-78b5784dc8-mxm9v\" (UID: \"f51c33cf-e651-4159-ba52-866ced1779f7\") " pod="calico-apiserver/calico-apiserver-78b5784dc8-mxm9v" May 15 12:40:31.589009 kubelet[2830]: E0515 12:40:31.586335 2830 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" May 15 12:40:31.590668 containerd[1570]: time="2025-05-15T12:40:31.590606092Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-vnsrk,Uid:d81f736f-2cfe-4dd7-8bae-39e5d7b0171c,Namespace:kube-system,Attempt:0,}" May 15 12:40:31.601315 kubelet[2830]: E0515 12:40:31.601144 2830 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" May 15 12:40:31.604554 containerd[1570]: time="2025-05-15T12:40:31.604507437Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-zltl2,Uid:f6c810d1-56b7-4269-a991-f69aed60bb27,Namespace:kube-system,Attempt:0,}" May 15 12:40:31.623115 containerd[1570]: time="2025-05-15T12:40:31.621683706Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-78b5784dc8-8lbpp,Uid:28308a9a-6a5a-4c07-b05d-23fd8cc4a3e9,Namespace:calico-apiserver,Attempt:0,}" May 15 12:40:31.655325 containerd[1570]: time="2025-05-15T12:40:31.655273762Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-78b5784dc8-mxm9v,Uid:f51c33cf-e651-4159-ba52-866ced1779f7,Namespace:calico-apiserver,Attempt:0,}" May 15 12:40:31.659525 containerd[1570]: time="2025-05-15T12:40:31.659262899Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-b4bb544b7-zbnfw,Uid:b4d5a2a6-1051-40f7-84e4-ce0a66d4b74d,Namespace:calico-system,Attempt:0,}" May 15 12:40:31.681992 containerd[1570]: time="2025-05-15T12:40:31.681385150Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-794557d677-skbcb,Uid:086a9281-b1a7-45e0-92ca-5dca97c27bd4,Namespace:calico-apiserver,Attempt:0,}" May 15 12:40:31.899805 systemd[1]: Created slice kubepods-besteffort-pod2f1afa6e_6224_473c_8d91_9f8e0eedd57e.slice - libcontainer container kubepods-besteffort-pod2f1afa6e_6224_473c_8d91_9f8e0eedd57e.slice. May 15 12:40:31.908751 containerd[1570]: time="2025-05-15T12:40:31.908714200Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-n6z76,Uid:2f1afa6e-6224-473c-8d91-9f8e0eedd57e,Namespace:calico-system,Attempt:0,}" May 15 12:40:31.970898 containerd[1570]: time="2025-05-15T12:40:31.970825370Z" level=error msg="Failed to destroy network for sandbox \"69124f5ff5b3261029140b9ff6202a4185433227471c661140633a830350c37d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 12:40:31.977431 containerd[1570]: time="2025-05-15T12:40:31.977385770Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-zltl2,Uid:f6c810d1-56b7-4269-a991-f69aed60bb27,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"69124f5ff5b3261029140b9ff6202a4185433227471c661140633a830350c37d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 12:40:31.978204 kubelet[2830]: E0515 12:40:31.978128 2830 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"69124f5ff5b3261029140b9ff6202a4185433227471c661140633a830350c37d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 12:40:31.979048 kubelet[2830]: E0515 12:40:31.978429 2830 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"69124f5ff5b3261029140b9ff6202a4185433227471c661140633a830350c37d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-zltl2" May 15 12:40:31.979048 kubelet[2830]: E0515 12:40:31.978483 2830 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"69124f5ff5b3261029140b9ff6202a4185433227471c661140633a830350c37d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-zltl2" May 15 12:40:31.979989 kubelet[2830]: E0515 12:40:31.979209 2830 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-zltl2_kube-system(f6c810d1-56b7-4269-a991-f69aed60bb27)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-zltl2_kube-system(f6c810d1-56b7-4269-a991-f69aed60bb27)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"69124f5ff5b3261029140b9ff6202a4185433227471c661140633a830350c37d\\\": plugin type=\\\"calico\\\" failed (add): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-zltl2" podUID="f6c810d1-56b7-4269-a991-f69aed60bb27" May 15 12:40:32.020285 containerd[1570]: time="2025-05-15T12:40:32.020150742Z" level=error msg="Failed to destroy network for sandbox \"4cf0a173f38a614b3f383b192ce55df2ca9053af084dfb881cc14665923aed3a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 12:40:32.027072 containerd[1570]: time="2025-05-15T12:40:32.027035468Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-78b5784dc8-mxm9v,Uid:f51c33cf-e651-4159-ba52-866ced1779f7,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4cf0a173f38a614b3f383b192ce55df2ca9053af084dfb881cc14665923aed3a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 12:40:32.028106 kubelet[2830]: E0515 12:40:32.027518 2830 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4cf0a173f38a614b3f383b192ce55df2ca9053af084dfb881cc14665923aed3a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 12:40:32.028106 kubelet[2830]: E0515 12:40:32.027583 2830 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4cf0a173f38a614b3f383b192ce55df2ca9053af084dfb881cc14665923aed3a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-78b5784dc8-mxm9v" May 15 12:40:32.028106 kubelet[2830]: E0515 12:40:32.027610 2830 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4cf0a173f38a614b3f383b192ce55df2ca9053af084dfb881cc14665923aed3a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-78b5784dc8-mxm9v" May 15 12:40:32.028271 kubelet[2830]: E0515 12:40:32.028061 2830 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-78b5784dc8-mxm9v_calico-apiserver(f51c33cf-e651-4159-ba52-866ced1779f7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-78b5784dc8-mxm9v_calico-apiserver(f51c33cf-e651-4159-ba52-866ced1779f7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4cf0a173f38a614b3f383b192ce55df2ca9053af084dfb881cc14665923aed3a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-78b5784dc8-mxm9v" podUID="f51c33cf-e651-4159-ba52-866ced1779f7" May 15 12:40:32.029360 
containerd[1570]: time="2025-05-15T12:40:32.029333748Z" level=error msg="Failed to destroy network for sandbox \"f0901b463fa93f961897255a2619f2a82222c23cfff0a2dc981bbaf837be2623\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 12:40:32.031067 containerd[1570]: time="2025-05-15T12:40:32.031027254Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-b4bb544b7-zbnfw,Uid:b4d5a2a6-1051-40f7-84e4-ce0a66d4b74d,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f0901b463fa93f961897255a2619f2a82222c23cfff0a2dc981bbaf837be2623\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 12:40:32.031511 kubelet[2830]: E0515 12:40:32.031483 2830 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f0901b463fa93f961897255a2619f2a82222c23cfff0a2dc981bbaf837be2623\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 12:40:32.031644 kubelet[2830]: E0515 12:40:32.031615 2830 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f0901b463fa93f961897255a2619f2a82222c23cfff0a2dc981bbaf837be2623\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-b4bb544b7-zbnfw" May 15 12:40:32.031714 kubelet[2830]: E0515 12:40:32.031700 2830 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f0901b463fa93f961897255a2619f2a82222c23cfff0a2dc981bbaf837be2623\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-b4bb544b7-zbnfw" May 15 12:40:32.031837 kubelet[2830]: E0515 12:40:32.031782 2830 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-b4bb544b7-zbnfw_calico-system(b4d5a2a6-1051-40f7-84e4-ce0a66d4b74d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-b4bb544b7-zbnfw_calico-system(b4d5a2a6-1051-40f7-84e4-ce0a66d4b74d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f0901b463fa93f961897255a2619f2a82222c23cfff0a2dc981bbaf837be2623\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-b4bb544b7-zbnfw" podUID="b4d5a2a6-1051-40f7-84e4-ce0a66d4b74d" May 15 12:40:32.049923 containerd[1570]: time="2025-05-15T12:40:32.049862795Z" level=error msg="Failed to destroy network for sandbox \"54c43e7f75a1d4872572a04b7a068c21112411bc33e1c66fc843d972a815f256\"" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 12:40:32.051167 containerd[1570]: time="2025-05-15T12:40:32.051135021Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-vnsrk,Uid:d81f736f-2cfe-4dd7-8bae-39e5d7b0171c,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"54c43e7f75a1d4872572a04b7a068c21112411bc33e1c66fc843d972a815f256\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 12:40:32.051452 kubelet[2830]: E0515 12:40:32.051355 2830 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"54c43e7f75a1d4872572a04b7a068c21112411bc33e1c66fc843d972a815f256\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 12:40:32.051512 kubelet[2830]: E0515 12:40:32.051479 2830 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"54c43e7f75a1d4872572a04b7a068c21112411bc33e1c66fc843d972a815f256\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-vnsrk" May 15 12:40:32.051512 kubelet[2830]: E0515 12:40:32.051500 2830 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"54c43e7f75a1d4872572a04b7a068c21112411bc33e1c66fc843d972a815f256\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-vnsrk" May 15 12:40:32.052582 kubelet[2830]: E0515 12:40:32.051596 2830 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-vnsrk_kube-system(d81f736f-2cfe-4dd7-8bae-39e5d7b0171c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-vnsrk_kube-system(d81f736f-2cfe-4dd7-8bae-39e5d7b0171c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"54c43e7f75a1d4872572a04b7a068c21112411bc33e1c66fc843d972a815f256\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-vnsrk" podUID="d81f736f-2cfe-4dd7-8bae-39e5d7b0171c" May 15 12:40:32.104897 kubelet[2830]: E0515 12:40:32.103328 2830 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" May 15 12:40:32.107687 containerd[1570]: time="2025-05-15T12:40:32.107041591Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\"" May 15 12:40:32.142716 containerd[1570]: time="2025-05-15T12:40:32.142590014Z" level=error msg="Failed to destroy network for sandbox \"62fcf97803fe0644f482ec523357fb1dece931ad163525410a6d72d028eac59e\"" 
error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 12:40:32.144385 containerd[1570]: time="2025-05-15T12:40:32.144355983Z" level=error msg="Failed to destroy network for sandbox \"e58446fc05aa30f03231f506cb263ee4414354888af56ac0ec99200c2ce6da5e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 12:40:32.144843 containerd[1570]: time="2025-05-15T12:40:32.143955835Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-78b5784dc8-8lbpp,Uid:28308a9a-6a5a-4c07-b05d-23fd8cc4a3e9,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"62fcf97803fe0644f482ec523357fb1dece931ad163525410a6d72d028eac59e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 12:40:32.145359 kubelet[2830]: E0515 12:40:32.145326 2830 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"62fcf97803fe0644f482ec523357fb1dece931ad163525410a6d72d028eac59e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 12:40:32.145499 kubelet[2830]: E0515 12:40:32.145470 2830 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"62fcf97803fe0644f482ec523357fb1dece931ad163525410a6d72d028eac59e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-78b5784dc8-8lbpp" May 15 12:40:32.145580 kubelet[2830]: E0515 12:40:32.145564 2830 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"62fcf97803fe0644f482ec523357fb1dece931ad163525410a6d72d028eac59e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-78b5784dc8-8lbpp" May 15 12:40:32.145689 kubelet[2830]: E0515 12:40:32.145661 2830 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-78b5784dc8-8lbpp_calico-apiserver(28308a9a-6a5a-4c07-b05d-23fd8cc4a3e9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-78b5784dc8-8lbpp_calico-apiserver(28308a9a-6a5a-4c07-b05d-23fd8cc4a3e9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"62fcf97803fe0644f482ec523357fb1dece931ad163525410a6d72d028eac59e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-78b5784dc8-8lbpp" podUID="28308a9a-6a5a-4c07-b05d-23fd8cc4a3e9" May 15 12:40:32.147505 containerd[1570]: time="2025-05-15T12:40:32.146962244Z" 
level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-794557d677-skbcb,Uid:086a9281-b1a7-45e0-92ca-5dca97c27bd4,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e58446fc05aa30f03231f506cb263ee4414354888af56ac0ec99200c2ce6da5e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 12:40:32.148651 kubelet[2830]: E0515 12:40:32.148591 2830 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e58446fc05aa30f03231f506cb263ee4414354888af56ac0ec99200c2ce6da5e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 12:40:32.148885 kubelet[2830]: E0515 12:40:32.148867 2830 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e58446fc05aa30f03231f506cb263ee4414354888af56ac0ec99200c2ce6da5e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-794557d677-skbcb" May 15 12:40:32.149006 kubelet[2830]: E0515 12:40:32.148957 2830 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e58446fc05aa30f03231f506cb263ee4414354888af56ac0ec99200c2ce6da5e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-794557d677-skbcb" May 15 12:40:32.149133 kubelet[2830]: E0515 12:40:32.149093 2830 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-794557d677-skbcb_calico-apiserver(086a9281-b1a7-45e0-92ca-5dca97c27bd4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-794557d677-skbcb_calico-apiserver(086a9281-b1a7-45e0-92ca-5dca97c27bd4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e58446fc05aa30f03231f506cb263ee4414354888af56ac0ec99200c2ce6da5e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-794557d677-skbcb" podUID="086a9281-b1a7-45e0-92ca-5dca97c27bd4" May 15 12:40:32.155822 containerd[1570]: time="2025-05-15T12:40:32.155712807Z" level=error msg="Failed to destroy network for sandbox \"9495c08848d69fc289df53e3757b3e51b0d0071d0e79ded713d26b2259547264\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 12:40:32.157361 containerd[1570]: time="2025-05-15T12:40:32.157310657Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-n6z76,Uid:2f1afa6e-6224-473c-8d91-9f8e0eedd57e,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"9495c08848d69fc289df53e3757b3e51b0d0071d0e79ded713d26b2259547264\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 12:40:32.157521 kubelet[2830]: E0515 12:40:32.157499 2830 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9495c08848d69fc289df53e3757b3e51b0d0071d0e79ded713d26b2259547264\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 12:40:32.157617 kubelet[2830]: E0515 12:40:32.157592 2830 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9495c08848d69fc289df53e3757b3e51b0d0071d0e79ded713d26b2259547264\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-n6z76" May 15 12:40:32.157694 kubelet[2830]: E0515 12:40:32.157620 2830 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9495c08848d69fc289df53e3757b3e51b0d0071d0e79ded713d26b2259547264\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-n6z76" May 15 12:40:32.157987 kubelet[2830]: E0515 12:40:32.157665 2830 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-n6z76_calico-system(2f1afa6e-6224-473c-8d91-9f8e0eedd57e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-n6z76_calico-system(2f1afa6e-6224-473c-8d91-9f8e0eedd57e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9495c08848d69fc289df53e3757b3e51b0d0071d0e79ded713d26b2259547264\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-n6z76" podUID="2f1afa6e-6224-473c-8d91-9f8e0eedd57e" May 15 12:40:41.371572 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2395161942.mount: Deactivated successfully. 
May 15 12:40:41.407820 containerd[1570]: time="2025-05-15T12:40:41.407745031Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:40:41.408727 containerd[1570]: time="2025-05-15T12:40:41.408562944Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.3: active requests=0, bytes read=144068748" May 15 12:40:41.409238 containerd[1570]: time="2025-05-15T12:40:41.409205961Z" level=info msg="ImageCreate event name:\"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:40:41.410702 containerd[1570]: time="2025-05-15T12:40:41.410672271Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:40:41.411280 containerd[1570]: time="2025-05-15T12:40:41.411237287Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.3\" with image id \"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\", size \"144068610\" in 9.303422644s" May 15 12:40:41.411358 containerd[1570]: time="2025-05-15T12:40:41.411343352Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\" returns image reference \"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\"" May 15 12:40:41.429069 containerd[1570]: time="2025-05-15T12:40:41.427486683Z" level=info msg="CreateContainer within sandbox \"8abd2a4186fac809bfb50819f8f51048f513f098c45622e059d9de857ea226cf\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" May 15 12:40:41.443053 containerd[1570]: time="2025-05-15T12:40:41.443022561Z" level=info msg="Container f0ca9e65364680a36033b88ab53d62c415bfd51711cfa7ce9f49e930a8b0e780: CDI devices from CRI Config.CDIDevices: []" May 15 12:40:41.451456 containerd[1570]: time="2025-05-15T12:40:41.451420746Z" level=info msg="CreateContainer within sandbox \"8abd2a4186fac809bfb50819f8f51048f513f098c45622e059d9de857ea226cf\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"f0ca9e65364680a36033b88ab53d62c415bfd51711cfa7ce9f49e930a8b0e780\"" May 15 12:40:41.453163 containerd[1570]: time="2025-05-15T12:40:41.453139964Z" level=info msg="StartContainer for \"f0ca9e65364680a36033b88ab53d62c415bfd51711cfa7ce9f49e930a8b0e780\"" May 15 12:40:41.454784 containerd[1570]: time="2025-05-15T12:40:41.454762619Z" level=info msg="connecting to shim f0ca9e65364680a36033b88ab53d62c415bfd51711cfa7ce9f49e930a8b0e780" address="unix:///run/containerd/s/0881dc722006ea1ef3e033d1c794a14bfd83cf624e57dc7ca492a316d1c8a198" protocol=ttrpc version=3 May 15 12:40:41.476187 systemd[1]: Started cri-containerd-f0ca9e65364680a36033b88ab53d62c415bfd51711cfa7ce9f49e930a8b0e780.scope - libcontainer container f0ca9e65364680a36033b88ab53d62c415bfd51711cfa7ce9f49e930a8b0e780. May 15 12:40:41.527806 containerd[1570]: time="2025-05-15T12:40:41.527758787Z" level=info msg="StartContainer for \"f0ca9e65364680a36033b88ab53d62c415bfd51711cfa7ce9f49e930a8b0e780\" returns successfully" May 15 12:40:41.616193 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. May 15 12:40:41.616318 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>.
All Rights Reserved. May 15 12:40:42.163729 kubelet[2830]: E0515 12:40:42.163681 2830 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" May 15 12:40:42.203325 kubelet[2830]: I0515 12:40:42.203187 2830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-rvtg5" podStartSLOduration=2.175652579 podStartE2EDuration="35.20194431s" podCreationTimestamp="2025-05-15 12:40:07 +0000 UTC" firstStartedPulling="2025-05-15 12:40:08.386388053 +0000 UTC m=+22.623626927" lastFinishedPulling="2025-05-15 12:40:41.412679784 +0000 UTC m=+55.649918658" observedRunningTime="2025-05-15 12:40:42.199846027 +0000 UTC m=+56.437084901" watchObservedRunningTime="2025-05-15 12:40:42.20194431 +0000 UTC m=+56.439183184" May 15 12:40:42.252678 containerd[1570]: time="2025-05-15T12:40:42.252623263Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f0ca9e65364680a36033b88ab53d62c415bfd51711cfa7ce9f49e930a8b0e780\" id:\"896d3e204883097916ea42bead4b166708df254e14d09db26fc50d6c623b4d6c\" pid:3817 exit_status:1 exited_at:{seconds:1747312842 nanos:251712907}" May 15 12:40:42.885752 kubelet[2830]: E0515 12:40:42.885685 2830 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" May 15 12:40:42.886879 containerd[1570]: time="2025-05-15T12:40:42.886763240Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-zltl2,Uid:f6c810d1-56b7-4269-a991-f69aed60bb27,Namespace:kube-system,Attempt:0,}" May 15 12:40:43.146465 systemd-networkd[1466]: cali167fdde3099: Link UP May 15 12:40:43.150313 systemd-networkd[1466]: cali167fdde3099: Gained carrier May 15 12:40:43.174811 containerd[1570]: 2025-05-15 12:40:42.938 [INFO][3829] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 15 12:40:43.174811 containerd[1570]: 2025-05-15 12:40:42.963 [INFO][3829] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--236--125--189-k8s-coredns--7db6d8ff4d--zltl2-eth0 coredns-7db6d8ff4d- kube-system f6c810d1-56b7-4269-a991-f69aed60bb27 819 0 2025-05-15 12:39:59 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s 172-236-125-189 coredns-7db6d8ff4d-zltl2 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali167fdde3099 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="5407125ade3d68004fd976e1c209ad1fa7fd246af1566509325a9a05aa0ef90c" Namespace="kube-system" Pod="coredns-7db6d8ff4d-zltl2" WorkloadEndpoint="172--236--125--189-k8s-coredns--7db6d8ff4d--zltl2-" May 15 12:40:43.174811 containerd[1570]: 2025-05-15 12:40:42.964 [INFO][3829] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="5407125ade3d68004fd976e1c209ad1fa7fd246af1566509325a9a05aa0ef90c" Namespace="kube-system" Pod="coredns-7db6d8ff4d-zltl2" WorkloadEndpoint="172--236--125--189-k8s-coredns--7db6d8ff4d--zltl2-eth0" May 15 12:40:43.174811 containerd[1570]: 2025-05-15 12:40:43.077 [INFO][3898] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5407125ade3d68004fd976e1c209ad1fa7fd246af1566509325a9a05aa0ef90c" 
HandleID="k8s-pod-network.5407125ade3d68004fd976e1c209ad1fa7fd246af1566509325a9a05aa0ef90c" Workload="172--236--125--189-k8s-coredns--7db6d8ff4d--zltl2-eth0" May 15 12:40:43.175087 containerd[1570]: 2025-05-15 12:40:43.089 [INFO][3898] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5407125ade3d68004fd976e1c209ad1fa7fd246af1566509325a9a05aa0ef90c" HandleID="k8s-pod-network.5407125ade3d68004fd976e1c209ad1fa7fd246af1566509325a9a05aa0ef90c" Workload="172--236--125--189-k8s-coredns--7db6d8ff4d--zltl2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00031c070), Attrs:map[string]string{"namespace":"kube-system", "node":"172-236-125-189", "pod":"coredns-7db6d8ff4d-zltl2", "timestamp":"2025-05-15 12:40:43.077735363 +0000 UTC"}, Hostname:"172-236-125-189", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 15 12:40:43.175087 containerd[1570]: 2025-05-15 12:40:43.090 [INFO][3898] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 12:40:43.175087 containerd[1570]: 2025-05-15 12:40:43.090 [INFO][3898] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 15 12:40:43.175087 containerd[1570]: 2025-05-15 12:40:43.090 [INFO][3898] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-236-125-189' May 15 12:40:43.175087 containerd[1570]: 2025-05-15 12:40:43.092 [INFO][3898] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.5407125ade3d68004fd976e1c209ad1fa7fd246af1566509325a9a05aa0ef90c" host="172-236-125-189" May 15 12:40:43.175087 containerd[1570]: 2025-05-15 12:40:43.098 [INFO][3898] ipam/ipam.go 372: Looking up existing affinities for host host="172-236-125-189" May 15 12:40:43.175087 containerd[1570]: 2025-05-15 12:40:43.103 [INFO][3898] ipam/ipam.go 489: Trying affinity for 192.168.83.128/26 host="172-236-125-189" May 15 12:40:43.175087 containerd[1570]: 2025-05-15 12:40:43.105 [INFO][3898] ipam/ipam.go 155: Attempting to load block cidr=192.168.83.128/26 host="172-236-125-189" May 15 12:40:43.175087 containerd[1570]: 2025-05-15 12:40:43.107 [INFO][3898] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.83.128/26 host="172-236-125-189" May 15 12:40:43.175087 containerd[1570]: 2025-05-15 12:40:43.109 [INFO][3898] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.83.128/26 handle="k8s-pod-network.5407125ade3d68004fd976e1c209ad1fa7fd246af1566509325a9a05aa0ef90c" host="172-236-125-189" May 15 12:40:43.175296 containerd[1570]: 2025-05-15 12:40:43.111 [INFO][3898] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.5407125ade3d68004fd976e1c209ad1fa7fd246af1566509325a9a05aa0ef90c May 15 12:40:43.175296 containerd[1570]: 2025-05-15 12:40:43.115 [INFO][3898] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.83.128/26 handle="k8s-pod-network.5407125ade3d68004fd976e1c209ad1fa7fd246af1566509325a9a05aa0ef90c" host="172-236-125-189" May 15 12:40:43.175296 containerd[1570]: 2025-05-15 12:40:43.122 [INFO][3898] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.83.129/26] block=192.168.83.128/26 handle="k8s-pod-network.5407125ade3d68004fd976e1c209ad1fa7fd246af1566509325a9a05aa0ef90c" host="172-236-125-189" May 15 12:40:43.175296 containerd[1570]: 2025-05-15 12:40:43.122 [INFO][3898] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.83.129/26] 
handle="k8s-pod-network.5407125ade3d68004fd976e1c209ad1fa7fd246af1566509325a9a05aa0ef90c" host="172-236-125-189" May 15 12:40:43.175296 containerd[1570]: 2025-05-15 12:40:43.122 [INFO][3898] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 15 12:40:43.175296 containerd[1570]: 2025-05-15 12:40:43.122 [INFO][3898] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.83.129/26] IPv6=[] ContainerID="5407125ade3d68004fd976e1c209ad1fa7fd246af1566509325a9a05aa0ef90c" HandleID="k8s-pod-network.5407125ade3d68004fd976e1c209ad1fa7fd246af1566509325a9a05aa0ef90c" Workload="172--236--125--189-k8s-coredns--7db6d8ff4d--zltl2-eth0" May 15 12:40:43.175412 kubelet[2830]: E0515 12:40:43.175106 2830 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" May 15 12:40:43.175706 containerd[1570]: 2025-05-15 12:40:43.132 [INFO][3829] cni-plugin/k8s.go 386: Populated endpoint ContainerID="5407125ade3d68004fd976e1c209ad1fa7fd246af1566509325a9a05aa0ef90c" Namespace="kube-system" Pod="coredns-7db6d8ff4d-zltl2" WorkloadEndpoint="172--236--125--189-k8s-coredns--7db6d8ff4d--zltl2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--236--125--189-k8s-coredns--7db6d8ff4d--zltl2-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"f6c810d1-56b7-4269-a991-f69aed60bb27", ResourceVersion:"819", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 12, 39, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-236-125-189", ContainerID:"", Pod:"coredns-7db6d8ff4d-zltl2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.83.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali167fdde3099", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 12:40:43.179690 containerd[1570]: 2025-05-15 12:40:43.132 [INFO][3829] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.83.129/32] ContainerID="5407125ade3d68004fd976e1c209ad1fa7fd246af1566509325a9a05aa0ef90c" Namespace="kube-system" Pod="coredns-7db6d8ff4d-zltl2" WorkloadEndpoint="172--236--125--189-k8s-coredns--7db6d8ff4d--zltl2-eth0" May 15 12:40:43.179690 containerd[1570]: 2025-05-15 12:40:43.132 [INFO][3829] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali167fdde3099 
ContainerID="5407125ade3d68004fd976e1c209ad1fa7fd246af1566509325a9a05aa0ef90c" Namespace="kube-system" Pod="coredns-7db6d8ff4d-zltl2" WorkloadEndpoint="172--236--125--189-k8s-coredns--7db6d8ff4d--zltl2-eth0" May 15 12:40:43.179690 containerd[1570]: 2025-05-15 12:40:43.146 [INFO][3829] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5407125ade3d68004fd976e1c209ad1fa7fd246af1566509325a9a05aa0ef90c" Namespace="kube-system" Pod="coredns-7db6d8ff4d-zltl2" WorkloadEndpoint="172--236--125--189-k8s-coredns--7db6d8ff4d--zltl2-eth0" May 15 12:40:43.179956 containerd[1570]: 2025-05-15 12:40:43.146 [INFO][3829] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="5407125ade3d68004fd976e1c209ad1fa7fd246af1566509325a9a05aa0ef90c" Namespace="kube-system" Pod="coredns-7db6d8ff4d-zltl2" WorkloadEndpoint="172--236--125--189-k8s-coredns--7db6d8ff4d--zltl2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--236--125--189-k8s-coredns--7db6d8ff4d--zltl2-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"f6c810d1-56b7-4269-a991-f69aed60bb27", ResourceVersion:"819", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 12, 39, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-236-125-189", ContainerID:"5407125ade3d68004fd976e1c209ad1fa7fd246af1566509325a9a05aa0ef90c", Pod:"coredns-7db6d8ff4d-zltl2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.83.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali167fdde3099", MAC:"02:67:ba:d5:8b:77", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 12:40:43.179956 containerd[1570]: 2025-05-15 12:40:43.157 [INFO][3829] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="5407125ade3d68004fd976e1c209ad1fa7fd246af1566509325a9a05aa0ef90c" Namespace="kube-system" Pod="coredns-7db6d8ff4d-zltl2" WorkloadEndpoint="172--236--125--189-k8s-coredns--7db6d8ff4d--zltl2-eth0" May 15 12:40:43.287563 containerd[1570]: time="2025-05-15T12:40:43.287022179Z" level=info msg="connecting to shim 5407125ade3d68004fd976e1c209ad1fa7fd246af1566509325a9a05aa0ef90c" address="unix:///run/containerd/s/67b657b2451bd59b1631ed6511bfa60ab2736f194b2eef613f80c1f39720faf5" namespace=k8s.io protocol=ttrpc version=3 May 15 12:40:43.354383 systemd[1]: Started cri-containerd-5407125ade3d68004fd976e1c209ad1fa7fd246af1566509325a9a05aa0ef90c.scope - 
libcontainer container 5407125ade3d68004fd976e1c209ad1fa7fd246af1566509325a9a05aa0ef90c. May 15 12:40:43.436194 containerd[1570]: time="2025-05-15T12:40:43.436062432Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-zltl2,Uid:f6c810d1-56b7-4269-a991-f69aed60bb27,Namespace:kube-system,Attempt:0,} returns sandbox id \"5407125ade3d68004fd976e1c209ad1fa7fd246af1566509325a9a05aa0ef90c\"" May 15 12:40:43.438725 kubelet[2830]: E0515 12:40:43.438703 2830 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" May 15 12:40:43.443338 containerd[1570]: time="2025-05-15T12:40:43.443303801Z" level=info msg="CreateContainer within sandbox \"5407125ade3d68004fd976e1c209ad1fa7fd246af1566509325a9a05aa0ef90c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 15 12:40:43.457886 containerd[1570]: time="2025-05-15T12:40:43.457757004Z" level=info msg="Container b4cbb028545dfe4d8ae99f963ecd9e2cc10e3d298c1ed84fe926dad2d0842d6a: CDI devices from CRI Config.CDIDevices: []" May 15 12:40:43.464473 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1485164719.mount: Deactivated successfully. May 15 12:40:43.470892 containerd[1570]: time="2025-05-15T12:40:43.470770286Z" level=info msg="CreateContainer within sandbox \"5407125ade3d68004fd976e1c209ad1fa7fd246af1566509325a9a05aa0ef90c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b4cbb028545dfe4d8ae99f963ecd9e2cc10e3d298c1ed84fe926dad2d0842d6a\"" May 15 12:40:43.477783 containerd[1570]: time="2025-05-15T12:40:43.477591312Z" level=info msg="StartContainer for \"b4cbb028545dfe4d8ae99f963ecd9e2cc10e3d298c1ed84fe926dad2d0842d6a\"" May 15 12:40:43.481004 containerd[1570]: time="2025-05-15T12:40:43.480949958Z" level=info msg="connecting to shim b4cbb028545dfe4d8ae99f963ecd9e2cc10e3d298c1ed84fe926dad2d0842d6a" address="unix:///run/containerd/s/67b657b2451bd59b1631ed6511bfa60ab2736f194b2eef613f80c1f39720faf5" protocol=ttrpc version=3 May 15 12:40:43.515844 systemd[1]: Started cri-containerd-b4cbb028545dfe4d8ae99f963ecd9e2cc10e3d298c1ed84fe926dad2d0842d6a.scope - libcontainer container b4cbb028545dfe4d8ae99f963ecd9e2cc10e3d298c1ed84fe926dad2d0842d6a. 
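[Annotation] The pod_startup_latency_tracker entries in this stretch of the log reconcile exactly once you note that the SLO figure excludes image-pull time while the E2E figure does not (this matches the upstream pod-startup SLI definition kubelet implements). Worked through for calico-node-rvtg5 above, using the watchObservedRunningTime and the pull timestamps from that entry:

    podStartE2EDuration = watchObservedRunningTime - podCreationTimestamp
                        = 12:40:42.20194431 - 12:40:07.000 = 35.20194431 s
    pull window         = lastFinishedPulling - firstStartedPulling
                        = 12:40:41.412679784 - 12:40:08.386388053 = 33.026291731 s
    podStartSLOduration = 35.20194431 - 33.026291731 = 2.175652579 s

which is the logged podStartSLOduration=2.175652579. For coredns-7db6d8ff4d-zltl2, reported a few entries below, both pull timestamps are the zero value 0001-01-01, so nothing is subtracted and SLO equals E2E at 45.193316369 s (12:40:44.193 minus 12:39:59).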
May 15 12:40:43.517455 containerd[1570]: time="2025-05-15T12:40:43.515468384Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f0ca9e65364680a36033b88ab53d62c415bfd51711cfa7ce9f49e930a8b0e780\" id:\"98f2177bd6362584e6f5f874284758be7a739fcbbcfafe659457210b79a33547\" pid:3961 exit_status:1 exited_at:{seconds:1747312843 nanos:514326096}" May 15 12:40:43.572267 containerd[1570]: time="2025-05-15T12:40:43.572214931Z" level=info msg="StartContainer for \"b4cbb028545dfe4d8ae99f963ecd9e2cc10e3d298c1ed84fe926dad2d0842d6a\" returns successfully" May 15 12:40:43.806378 systemd-networkd[1466]: vxlan.calico: Link UP May 15 12:40:43.806389 systemd-networkd[1466]: vxlan.calico: Gained carrier May 15 12:40:44.177281 kubelet[2830]: E0515 12:40:44.176540 2830 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" May 15 12:40:44.193401 kubelet[2830]: I0515 12:40:44.193334 2830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-zltl2" podStartSLOduration=45.193316369 podStartE2EDuration="45.193316369s" podCreationTimestamp="2025-05-15 12:39:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 12:40:44.192669975 +0000 UTC m=+58.429908849" watchObservedRunningTime="2025-05-15 12:40:44.193316369 +0000 UTC m=+58.430555243" May 15 12:40:45.046232 systemd-networkd[1466]: cali167fdde3099: Gained IPv6LL May 15 12:40:45.174104 systemd-networkd[1466]: vxlan.calico: Gained IPv6LL May 15 12:40:45.178053 kubelet[2830]: E0515 12:40:45.178027 2830 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" May 15 12:40:45.886007 containerd[1570]: time="2025-05-15T12:40:45.885611638Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-78b5784dc8-8lbpp,Uid:28308a9a-6a5a-4c07-b05d-23fd8cc4a3e9,Namespace:calico-apiserver,Attempt:0,}" May 15 12:40:45.894381 containerd[1570]: time="2025-05-15T12:40:45.889994493Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-794557d677-skbcb,Uid:086a9281-b1a7-45e0-92ca-5dca97c27bd4,Namespace:calico-apiserver,Attempt:0,}" May 15 12:40:46.087387 systemd-networkd[1466]: calia21bf329e8b: Link UP May 15 12:40:46.088045 systemd-networkd[1466]: calia21bf329e8b: Gained carrier May 15 12:40:46.119458 containerd[1570]: 2025-05-15 12:40:45.963 [INFO][4161] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--236--125--189-k8s-calico--apiserver--794557d677--skbcb-eth0 calico-apiserver-794557d677- calico-apiserver 086a9281-b1a7-45e0-92ca-5dca97c27bd4 818 0 2025-05-15 12:40:08 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:794557d677 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s 172-236-125-189 calico-apiserver-794557d677-skbcb eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calia21bf329e8b [] []}} ContainerID="38a1229e305db91e48fc987ac62e1a0a74b70c72bb77b45e8aafea95bf32d6c7" Namespace="calico-apiserver" Pod="calico-apiserver-794557d677-skbcb" 
WorkloadEndpoint="172--236--125--189-k8s-calico--apiserver--794557d677--skbcb-" May 15 12:40:46.119458 containerd[1570]: 2025-05-15 12:40:45.964 [INFO][4161] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="38a1229e305db91e48fc987ac62e1a0a74b70c72bb77b45e8aafea95bf32d6c7" Namespace="calico-apiserver" Pod="calico-apiserver-794557d677-skbcb" WorkloadEndpoint="172--236--125--189-k8s-calico--apiserver--794557d677--skbcb-eth0" May 15 12:40:46.119458 containerd[1570]: 2025-05-15 12:40:46.022 [INFO][4185] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="38a1229e305db91e48fc987ac62e1a0a74b70c72bb77b45e8aafea95bf32d6c7" HandleID="k8s-pod-network.38a1229e305db91e48fc987ac62e1a0a74b70c72bb77b45e8aafea95bf32d6c7" Workload="172--236--125--189-k8s-calico--apiserver--794557d677--skbcb-eth0" May 15 12:40:46.119458 containerd[1570]: 2025-05-15 12:40:46.035 [INFO][4185] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="38a1229e305db91e48fc987ac62e1a0a74b70c72bb77b45e8aafea95bf32d6c7" HandleID="k8s-pod-network.38a1229e305db91e48fc987ac62e1a0a74b70c72bb77b45e8aafea95bf32d6c7" Workload="172--236--125--189-k8s-calico--apiserver--794557d677--skbcb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003340e0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"172-236-125-189", "pod":"calico-apiserver-794557d677-skbcb", "timestamp":"2025-05-15 12:40:46.021325243 +0000 UTC"}, Hostname:"172-236-125-189", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 15 12:40:46.119458 containerd[1570]: 2025-05-15 12:40:46.036 [INFO][4185] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 12:40:46.119458 containerd[1570]: 2025-05-15 12:40:46.036 [INFO][4185] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 15 12:40:46.119458 containerd[1570]: 2025-05-15 12:40:46.036 [INFO][4185] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-236-125-189' May 15 12:40:46.119458 containerd[1570]: 2025-05-15 12:40:46.038 [INFO][4185] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.38a1229e305db91e48fc987ac62e1a0a74b70c72bb77b45e8aafea95bf32d6c7" host="172-236-125-189" May 15 12:40:46.119458 containerd[1570]: 2025-05-15 12:40:46.044 [INFO][4185] ipam/ipam.go 372: Looking up existing affinities for host host="172-236-125-189" May 15 12:40:46.119458 containerd[1570]: 2025-05-15 12:40:46.054 [INFO][4185] ipam/ipam.go 489: Trying affinity for 192.168.83.128/26 host="172-236-125-189" May 15 12:40:46.119458 containerd[1570]: 2025-05-15 12:40:46.055 [INFO][4185] ipam/ipam.go 155: Attempting to load block cidr=192.168.83.128/26 host="172-236-125-189" May 15 12:40:46.119458 containerd[1570]: 2025-05-15 12:40:46.057 [INFO][4185] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.83.128/26 host="172-236-125-189" May 15 12:40:46.119458 containerd[1570]: 2025-05-15 12:40:46.058 [INFO][4185] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.83.128/26 handle="k8s-pod-network.38a1229e305db91e48fc987ac62e1a0a74b70c72bb77b45e8aafea95bf32d6c7" host="172-236-125-189" May 15 12:40:46.119458 containerd[1570]: 2025-05-15 12:40:46.059 [INFO][4185] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.38a1229e305db91e48fc987ac62e1a0a74b70c72bb77b45e8aafea95bf32d6c7 May 15 12:40:46.119458 containerd[1570]: 2025-05-15 12:40:46.065 [INFO][4185] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.83.128/26 handle="k8s-pod-network.38a1229e305db91e48fc987ac62e1a0a74b70c72bb77b45e8aafea95bf32d6c7" host="172-236-125-189" May 15 12:40:46.119458 containerd[1570]: 2025-05-15 12:40:46.073 [INFO][4185] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.83.130/26] block=192.168.83.128/26 handle="k8s-pod-network.38a1229e305db91e48fc987ac62e1a0a74b70c72bb77b45e8aafea95bf32d6c7" host="172-236-125-189" May 15 12:40:46.119458 containerd[1570]: 2025-05-15 12:40:46.073 [INFO][4185] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.83.130/26] handle="k8s-pod-network.38a1229e305db91e48fc987ac62e1a0a74b70c72bb77b45e8aafea95bf32d6c7" host="172-236-125-189" May 15 12:40:46.119458 containerd[1570]: 2025-05-15 12:40:46.073 [INFO][4185] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
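[Annotation] Every CNI ADD in this log walks the same ipam.go sequence: take the host-wide IPAM lock (which is why concurrent requests, like the two apiserver pods here, visibly queue behind one another), look up the host's block affinity, load the affine block (192.168.83.128/26 on this node), claim the first free address, record a handle named after the sandbox ID, write the block back, release the lock. A minimal toy sketch of that flow in Go follows; the types are hypothetical stand-ins that mirror the logged step names, not libcalico-go's actual implementation.

package main

import (
	"fmt"
	"net"
	"sync"
)

// Toy stand-ins for Calico's IPAM state; step comments quote the log
// messages above, but none of this is the real libcalico-go code.
type block struct {
	cidr net.IPNet
	used map[string]string // ip -> handle
}

type ipamState struct {
	mu       sync.Mutex        // the "host-wide IPAM lock"
	affinity map[string]*block // hostname -> affine block
}

func (s *ipamState) autoAssign(host, handleID string) (net.IP, error) {
	s.mu.Lock()         // "Acquired host-wide IPAM lock."
	defer s.mu.Unlock() // "Released host-wide IPAM lock."

	b, ok := s.affinity[host] // "Looking up existing affinities for host"
	if !ok {
		return nil, fmt.Errorf("no affine block for %s", host)
	}
	// "Attempting to load block" / "Affinity is confirmed and block has been loaded"
	for ip := b.cidr.IP.Mask(b.cidr.Mask); b.cidr.Contains(ip); ip = next(ip) {
		if _, taken := b.used[ip.String()]; taken {
			continue
		}
		// "Creating new handle" + "Writing block in order to claim IPs"
		b.used[ip.String()] = handleID
		return ip, nil // "Successfully claimed IPs"
	}
	return nil, fmt.Errorf("block %s exhausted", b.cidr.String())
}

func next(ip net.IP) net.IP {
	out := make(net.IP, len(ip))
	copy(out, ip)
	for i := len(out) - 1; i >= 0; i-- {
		out[i]++
		if out[i] != 0 {
			break
		}
	}
	return out
}

func main() {
	_, cidr, _ := net.ParseCIDR("192.168.83.128/26")
	s := &ipamState{affinity: map[string]*block{
		"172-236-125-189": {cidr: *cidr, used: map[string]string{
			// .128 is the block's network address, so seed it as taken.
			"192.168.83.128": "reserved",
		}},
	}}
	ip, _ := s.autoAssign("172-236-125-189",
		"k8s-pod-network.5407125ade3d68004fd976e1c209ad1fa7fd246af1566509325a9a05aa0ef90c")
	fmt.Println(ip) // 192.168.83.129, matching the first assignment in the log
}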
May 15 12:40:46.119458 containerd[1570]: 2025-05-15 12:40:46.073 [INFO][4185] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.83.130/26] IPv6=[] ContainerID="38a1229e305db91e48fc987ac62e1a0a74b70c72bb77b45e8aafea95bf32d6c7" HandleID="k8s-pod-network.38a1229e305db91e48fc987ac62e1a0a74b70c72bb77b45e8aafea95bf32d6c7" Workload="172--236--125--189-k8s-calico--apiserver--794557d677--skbcb-eth0" May 15 12:40:46.120068 containerd[1570]: 2025-05-15 12:40:46.080 [INFO][4161] cni-plugin/k8s.go 386: Populated endpoint ContainerID="38a1229e305db91e48fc987ac62e1a0a74b70c72bb77b45e8aafea95bf32d6c7" Namespace="calico-apiserver" Pod="calico-apiserver-794557d677-skbcb" WorkloadEndpoint="172--236--125--189-k8s-calico--apiserver--794557d677--skbcb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--236--125--189-k8s-calico--apiserver--794557d677--skbcb-eth0", GenerateName:"calico-apiserver-794557d677-", Namespace:"calico-apiserver", SelfLink:"", UID:"086a9281-b1a7-45e0-92ca-5dca97c27bd4", ResourceVersion:"818", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 12, 40, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"794557d677", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-236-125-189", ContainerID:"", Pod:"calico-apiserver-794557d677-skbcb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.83.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia21bf329e8b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 12:40:46.120068 containerd[1570]: 2025-05-15 12:40:46.080 [INFO][4161] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.83.130/32] ContainerID="38a1229e305db91e48fc987ac62e1a0a74b70c72bb77b45e8aafea95bf32d6c7" Namespace="calico-apiserver" Pod="calico-apiserver-794557d677-skbcb" WorkloadEndpoint="172--236--125--189-k8s-calico--apiserver--794557d677--skbcb-eth0" May 15 12:40:46.120068 containerd[1570]: 2025-05-15 12:40:46.080 [INFO][4161] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia21bf329e8b ContainerID="38a1229e305db91e48fc987ac62e1a0a74b70c72bb77b45e8aafea95bf32d6c7" Namespace="calico-apiserver" Pod="calico-apiserver-794557d677-skbcb" WorkloadEndpoint="172--236--125--189-k8s-calico--apiserver--794557d677--skbcb-eth0" May 15 12:40:46.120068 containerd[1570]: 2025-05-15 12:40:46.088 [INFO][4161] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="38a1229e305db91e48fc987ac62e1a0a74b70c72bb77b45e8aafea95bf32d6c7" Namespace="calico-apiserver" Pod="calico-apiserver-794557d677-skbcb" WorkloadEndpoint="172--236--125--189-k8s-calico--apiserver--794557d677--skbcb-eth0" May 15 12:40:46.120068 containerd[1570]: 2025-05-15 12:40:46.089 [INFO][4161] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="38a1229e305db91e48fc987ac62e1a0a74b70c72bb77b45e8aafea95bf32d6c7" Namespace="calico-apiserver" Pod="calico-apiserver-794557d677-skbcb" WorkloadEndpoint="172--236--125--189-k8s-calico--apiserver--794557d677--skbcb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--236--125--189-k8s-calico--apiserver--794557d677--skbcb-eth0", GenerateName:"calico-apiserver-794557d677-", Namespace:"calico-apiserver", SelfLink:"", UID:"086a9281-b1a7-45e0-92ca-5dca97c27bd4", ResourceVersion:"818", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 12, 40, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"794557d677", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-236-125-189", ContainerID:"38a1229e305db91e48fc987ac62e1a0a74b70c72bb77b45e8aafea95bf32d6c7", Pod:"calico-apiserver-794557d677-skbcb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.83.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia21bf329e8b", MAC:"de:28:06:82:43:e7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 12:40:46.120068 containerd[1570]: 2025-05-15 12:40:46.110 [INFO][4161] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="38a1229e305db91e48fc987ac62e1a0a74b70c72bb77b45e8aafea95bf32d6c7" Namespace="calico-apiserver" Pod="calico-apiserver-794557d677-skbcb" WorkloadEndpoint="172--236--125--189-k8s-calico--apiserver--794557d677--skbcb-eth0" May 15 12:40:46.169555 systemd-networkd[1466]: cali517ff4fc318: Link UP May 15 12:40:46.172191 systemd-networkd[1466]: cali517ff4fc318: Gained carrier May 15 12:40:46.180438 containerd[1570]: time="2025-05-15T12:40:46.180386705Z" level=info msg="connecting to shim 38a1229e305db91e48fc987ac62e1a0a74b70c72bb77b45e8aafea95bf32d6c7" address="unix:///run/containerd/s/0963b49b9ce2d1c36988085f8af5d7f3d5b61b38c939f267ddf6d6141a37852c" namespace=k8s.io protocol=ttrpc version=3 May 15 12:40:46.192153 kubelet[2830]: E0515 12:40:46.192105 2830 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" May 15 12:40:46.229089 systemd[1]: Started cri-containerd-38a1229e305db91e48fc987ac62e1a0a74b70c72bb77b45e8aafea95bf32d6c7.scope - libcontainer container 38a1229e305db91e48fc987ac62e1a0a74b70c72bb77b45e8aafea95bf32d6c7. 
May 15 12:40:46.247825 containerd[1570]: 2025-05-15 12:40:45.959 [INFO][4159] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--236--125--189-k8s-calico--apiserver--78b5784dc8--8lbpp-eth0 calico-apiserver-78b5784dc8- calico-apiserver 28308a9a-6a5a-4c07-b05d-23fd8cc4a3e9 815 0 2025-05-15 12:40:07 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:78b5784dc8 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s 172-236-125-189 calico-apiserver-78b5784dc8-8lbpp eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali517ff4fc318 [] []}} ContainerID="fa02e25e87b15e2f48a92a373c9456feb99eee2b757d3822e3fb67ac0ecb034a" Namespace="calico-apiserver" Pod="calico-apiserver-78b5784dc8-8lbpp" WorkloadEndpoint="172--236--125--189-k8s-calico--apiserver--78b5784dc8--8lbpp-" May 15 12:40:46.247825 containerd[1570]: 2025-05-15 12:40:45.960 [INFO][4159] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="fa02e25e87b15e2f48a92a373c9456feb99eee2b757d3822e3fb67ac0ecb034a" Namespace="calico-apiserver" Pod="calico-apiserver-78b5784dc8-8lbpp" WorkloadEndpoint="172--236--125--189-k8s-calico--apiserver--78b5784dc8--8lbpp-eth0" May 15 12:40:46.247825 containerd[1570]: 2025-05-15 12:40:46.021 [INFO][4183] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fa02e25e87b15e2f48a92a373c9456feb99eee2b757d3822e3fb67ac0ecb034a" HandleID="k8s-pod-network.fa02e25e87b15e2f48a92a373c9456feb99eee2b757d3822e3fb67ac0ecb034a" Workload="172--236--125--189-k8s-calico--apiserver--78b5784dc8--8lbpp-eth0" May 15 12:40:46.247825 containerd[1570]: 2025-05-15 12:40:46.037 [INFO][4183] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="fa02e25e87b15e2f48a92a373c9456feb99eee2b757d3822e3fb67ac0ecb034a" HandleID="k8s-pod-network.fa02e25e87b15e2f48a92a373c9456feb99eee2b757d3822e3fb67ac0ecb034a" Workload="172--236--125--189-k8s-calico--apiserver--78b5784dc8--8lbpp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0004a00f0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"172-236-125-189", "pod":"calico-apiserver-78b5784dc8-8lbpp", "timestamp":"2025-05-15 12:40:46.021712649 +0000 UTC"}, Hostname:"172-236-125-189", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 15 12:40:46.247825 containerd[1570]: 2025-05-15 12:40:46.037 [INFO][4183] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 12:40:46.247825 containerd[1570]: 2025-05-15 12:40:46.073 [INFO][4183] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 15 12:40:46.247825 containerd[1570]: 2025-05-15 12:40:46.073 [INFO][4183] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-236-125-189' May 15 12:40:46.247825 containerd[1570]: 2025-05-15 12:40:46.075 [INFO][4183] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.fa02e25e87b15e2f48a92a373c9456feb99eee2b757d3822e3fb67ac0ecb034a" host="172-236-125-189" May 15 12:40:46.247825 containerd[1570]: 2025-05-15 12:40:46.089 [INFO][4183] ipam/ipam.go 372: Looking up existing affinities for host host="172-236-125-189" May 15 12:40:46.247825 containerd[1570]: 2025-05-15 12:40:46.098 [INFO][4183] ipam/ipam.go 489: Trying affinity for 192.168.83.128/26 host="172-236-125-189" May 15 12:40:46.247825 containerd[1570]: 2025-05-15 12:40:46.103 [INFO][4183] ipam/ipam.go 155: Attempting to load block cidr=192.168.83.128/26 host="172-236-125-189" May 15 12:40:46.247825 containerd[1570]: 2025-05-15 12:40:46.105 [INFO][4183] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.83.128/26 host="172-236-125-189" May 15 12:40:46.247825 containerd[1570]: 2025-05-15 12:40:46.105 [INFO][4183] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.83.128/26 handle="k8s-pod-network.fa02e25e87b15e2f48a92a373c9456feb99eee2b757d3822e3fb67ac0ecb034a" host="172-236-125-189" May 15 12:40:46.247825 containerd[1570]: 2025-05-15 12:40:46.110 [INFO][4183] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.fa02e25e87b15e2f48a92a373c9456feb99eee2b757d3822e3fb67ac0ecb034a May 15 12:40:46.247825 containerd[1570]: 2025-05-15 12:40:46.117 [INFO][4183] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.83.128/26 handle="k8s-pod-network.fa02e25e87b15e2f48a92a373c9456feb99eee2b757d3822e3fb67ac0ecb034a" host="172-236-125-189" May 15 12:40:46.247825 containerd[1570]: 2025-05-15 12:40:46.141 [INFO][4183] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.83.131/26] block=192.168.83.128/26 handle="k8s-pod-network.fa02e25e87b15e2f48a92a373c9456feb99eee2b757d3822e3fb67ac0ecb034a" host="172-236-125-189" May 15 12:40:46.247825 containerd[1570]: 2025-05-15 12:40:46.141 [INFO][4183] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.83.131/26] handle="k8s-pod-network.fa02e25e87b15e2f48a92a373c9456feb99eee2b757d3822e3fb67ac0ecb034a" host="172-236-125-189" May 15 12:40:46.247825 containerd[1570]: 2025-05-15 12:40:46.141 [INFO][4183] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 15 12:40:46.247825 containerd[1570]: 2025-05-15 12:40:46.141 [INFO][4183] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.83.131/26] IPv6=[] ContainerID="fa02e25e87b15e2f48a92a373c9456feb99eee2b757d3822e3fb67ac0ecb034a" HandleID="k8s-pod-network.fa02e25e87b15e2f48a92a373c9456feb99eee2b757d3822e3fb67ac0ecb034a" Workload="172--236--125--189-k8s-calico--apiserver--78b5784dc8--8lbpp-eth0" May 15 12:40:46.248545 containerd[1570]: 2025-05-15 12:40:46.158 [INFO][4159] cni-plugin/k8s.go 386: Populated endpoint ContainerID="fa02e25e87b15e2f48a92a373c9456feb99eee2b757d3822e3fb67ac0ecb034a" Namespace="calico-apiserver" Pod="calico-apiserver-78b5784dc8-8lbpp" WorkloadEndpoint="172--236--125--189-k8s-calico--apiserver--78b5784dc8--8lbpp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--236--125--189-k8s-calico--apiserver--78b5784dc8--8lbpp-eth0", GenerateName:"calico-apiserver-78b5784dc8-", Namespace:"calico-apiserver", SelfLink:"", UID:"28308a9a-6a5a-4c07-b05d-23fd8cc4a3e9", ResourceVersion:"815", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 12, 40, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"78b5784dc8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-236-125-189", ContainerID:"", Pod:"calico-apiserver-78b5784dc8-8lbpp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.83.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali517ff4fc318", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 12:40:46.248545 containerd[1570]: 2025-05-15 12:40:46.158 [INFO][4159] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.83.131/32] ContainerID="fa02e25e87b15e2f48a92a373c9456feb99eee2b757d3822e3fb67ac0ecb034a" Namespace="calico-apiserver" Pod="calico-apiserver-78b5784dc8-8lbpp" WorkloadEndpoint="172--236--125--189-k8s-calico--apiserver--78b5784dc8--8lbpp-eth0" May 15 12:40:46.248545 containerd[1570]: 2025-05-15 12:40:46.158 [INFO][4159] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali517ff4fc318 ContainerID="fa02e25e87b15e2f48a92a373c9456feb99eee2b757d3822e3fb67ac0ecb034a" Namespace="calico-apiserver" Pod="calico-apiserver-78b5784dc8-8lbpp" WorkloadEndpoint="172--236--125--189-k8s-calico--apiserver--78b5784dc8--8lbpp-eth0" May 15 12:40:46.248545 containerd[1570]: 2025-05-15 12:40:46.174 [INFO][4159] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fa02e25e87b15e2f48a92a373c9456feb99eee2b757d3822e3fb67ac0ecb034a" Namespace="calico-apiserver" Pod="calico-apiserver-78b5784dc8-8lbpp" WorkloadEndpoint="172--236--125--189-k8s-calico--apiserver--78b5784dc8--8lbpp-eth0" May 15 12:40:46.248545 containerd[1570]: 2025-05-15 12:40:46.174 [INFO][4159] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="fa02e25e87b15e2f48a92a373c9456feb99eee2b757d3822e3fb67ac0ecb034a" Namespace="calico-apiserver" Pod="calico-apiserver-78b5784dc8-8lbpp" WorkloadEndpoint="172--236--125--189-k8s-calico--apiserver--78b5784dc8--8lbpp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--236--125--189-k8s-calico--apiserver--78b5784dc8--8lbpp-eth0", GenerateName:"calico-apiserver-78b5784dc8-", Namespace:"calico-apiserver", SelfLink:"", UID:"28308a9a-6a5a-4c07-b05d-23fd8cc4a3e9", ResourceVersion:"815", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 12, 40, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"78b5784dc8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-236-125-189", ContainerID:"fa02e25e87b15e2f48a92a373c9456feb99eee2b757d3822e3fb67ac0ecb034a", Pod:"calico-apiserver-78b5784dc8-8lbpp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.83.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali517ff4fc318", MAC:"0a:0c:86:5f:c2:05", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 12:40:46.248545 containerd[1570]: 2025-05-15 12:40:46.234 [INFO][4159] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="fa02e25e87b15e2f48a92a373c9456feb99eee2b757d3822e3fb67ac0ecb034a" Namespace="calico-apiserver" Pod="calico-apiserver-78b5784dc8-8lbpp" WorkloadEndpoint="172--236--125--189-k8s-calico--apiserver--78b5784dc8--8lbpp-eth0" May 15 12:40:46.300234 containerd[1570]: time="2025-05-15T12:40:46.300159872Z" level=info msg="connecting to shim fa02e25e87b15e2f48a92a373c9456feb99eee2b757d3822e3fb67ac0ecb034a" address="unix:///run/containerd/s/0fe900a04d793088162ca5f187c4b740978aae8ffa8495d71fb453aa7c5a8057" namespace=k8s.io protocol=ttrpc version=3 May 15 12:40:46.346104 systemd[1]: Started cri-containerd-fa02e25e87b15e2f48a92a373c9456feb99eee2b757d3822e3fb67ac0ecb034a.scope - libcontainer container fa02e25e87b15e2f48a92a373c9456feb99eee2b757d3822e3fb67ac0ecb034a. 
May 15 12:40:46.469438 containerd[1570]: time="2025-05-15T12:40:46.469261899Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-794557d677-skbcb,Uid:086a9281-b1a7-45e0-92ca-5dca97c27bd4,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"38a1229e305db91e48fc987ac62e1a0a74b70c72bb77b45e8aafea95bf32d6c7\"" May 15 12:40:46.472777 containerd[1570]: time="2025-05-15T12:40:46.472728428Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" May 15 12:40:46.482358 containerd[1570]: time="2025-05-15T12:40:46.482256469Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-78b5784dc8-8lbpp,Uid:28308a9a-6a5a-4c07-b05d-23fd8cc4a3e9,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"fa02e25e87b15e2f48a92a373c9456feb99eee2b757d3822e3fb67ac0ecb034a\"" May 15 12:40:46.885852 kubelet[2830]: E0515 12:40:46.885441 2830 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" May 15 12:40:46.886873 containerd[1570]: time="2025-05-15T12:40:46.886469344Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-78b5784dc8-mxm9v,Uid:f51c33cf-e651-4159-ba52-866ced1779f7,Namespace:calico-apiserver,Attempt:0,}" May 15 12:40:46.887445 containerd[1570]: time="2025-05-15T12:40:46.887410209Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-b4bb544b7-zbnfw,Uid:b4d5a2a6-1051-40f7-84e4-ce0a66d4b74d,Namespace:calico-system,Attempt:0,}" May 15 12:40:46.887654 containerd[1570]: time="2025-05-15T12:40:46.887606707Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-vnsrk,Uid:d81f736f-2cfe-4dd7-8bae-39e5d7b0171c,Namespace:kube-system,Attempt:0,}" May 15 12:40:47.062105 systemd-networkd[1466]: caliee8ac216aca: Link UP May 15 12:40:47.062880 systemd-networkd[1466]: caliee8ac216aca: Gained carrier May 15 12:40:47.090962 containerd[1570]: 2025-05-15 12:40:46.953 [INFO][4325] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--236--125--189-k8s-calico--kube--controllers--b4bb544b7--zbnfw-eth0 calico-kube-controllers-b4bb544b7- calico-system b4d5a2a6-1051-40f7-84e4-ce0a66d4b74d 816 0 2025-05-15 12:40:08 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:b4bb544b7 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s 172-236-125-189 calico-kube-controllers-b4bb544b7-zbnfw eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] caliee8ac216aca [] []}} ContainerID="5379b2ac23b2a1ea32c6cc4705e1c105234e0ab7b66afc66f1415997fe8905b9" Namespace="calico-system" Pod="calico-kube-controllers-b4bb544b7-zbnfw" WorkloadEndpoint="172--236--125--189-k8s-calico--kube--controllers--b4bb544b7--zbnfw-" May 15 12:40:47.090962 containerd[1570]: 2025-05-15 12:40:46.953 [INFO][4325] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="5379b2ac23b2a1ea32c6cc4705e1c105234e0ab7b66afc66f1415997fe8905b9" Namespace="calico-system" Pod="calico-kube-controllers-b4bb544b7-zbnfw" WorkloadEndpoint="172--236--125--189-k8s-calico--kube--controllers--b4bb544b7--zbnfw-eth0" May 15 12:40:47.090962 containerd[1570]: 2025-05-15 12:40:47.007 [INFO][4355] ipam/ipam_plugin.go 225: Calico CNI IPAM request count 
IPv4=1 IPv6=0 ContainerID="5379b2ac23b2a1ea32c6cc4705e1c105234e0ab7b66afc66f1415997fe8905b9" HandleID="k8s-pod-network.5379b2ac23b2a1ea32c6cc4705e1c105234e0ab7b66afc66f1415997fe8905b9" Workload="172--236--125--189-k8s-calico--kube--controllers--b4bb544b7--zbnfw-eth0" May 15 12:40:47.090962 containerd[1570]: 2025-05-15 12:40:47.019 [INFO][4355] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5379b2ac23b2a1ea32c6cc4705e1c105234e0ab7b66afc66f1415997fe8905b9" HandleID="k8s-pod-network.5379b2ac23b2a1ea32c6cc4705e1c105234e0ab7b66afc66f1415997fe8905b9" Workload="172--236--125--189-k8s-calico--kube--controllers--b4bb544b7--zbnfw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00011bd70), Attrs:map[string]string{"namespace":"calico-system", "node":"172-236-125-189", "pod":"calico-kube-controllers-b4bb544b7-zbnfw", "timestamp":"2025-05-15 12:40:47.005724965 +0000 UTC"}, Hostname:"172-236-125-189", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 15 12:40:47.090962 containerd[1570]: 2025-05-15 12:40:47.019 [INFO][4355] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 12:40:47.090962 containerd[1570]: 2025-05-15 12:40:47.020 [INFO][4355] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 15 12:40:47.090962 containerd[1570]: 2025-05-15 12:40:47.020 [INFO][4355] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-236-125-189' May 15 12:40:47.090962 containerd[1570]: 2025-05-15 12:40:47.023 [INFO][4355] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.5379b2ac23b2a1ea32c6cc4705e1c105234e0ab7b66afc66f1415997fe8905b9" host="172-236-125-189" May 15 12:40:47.090962 containerd[1570]: 2025-05-15 12:40:47.030 [INFO][4355] ipam/ipam.go 372: Looking up existing affinities for host host="172-236-125-189" May 15 12:40:47.090962 containerd[1570]: 2025-05-15 12:40:47.034 [INFO][4355] ipam/ipam.go 489: Trying affinity for 192.168.83.128/26 host="172-236-125-189" May 15 12:40:47.090962 containerd[1570]: 2025-05-15 12:40:47.036 [INFO][4355] ipam/ipam.go 155: Attempting to load block cidr=192.168.83.128/26 host="172-236-125-189" May 15 12:40:47.090962 containerd[1570]: 2025-05-15 12:40:47.038 [INFO][4355] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.83.128/26 host="172-236-125-189" May 15 12:40:47.090962 containerd[1570]: 2025-05-15 12:40:47.038 [INFO][4355] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.83.128/26 handle="k8s-pod-network.5379b2ac23b2a1ea32c6cc4705e1c105234e0ab7b66afc66f1415997fe8905b9" host="172-236-125-189" May 15 12:40:47.090962 containerd[1570]: 2025-05-15 12:40:47.039 [INFO][4355] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.5379b2ac23b2a1ea32c6cc4705e1c105234e0ab7b66afc66f1415997fe8905b9 May 15 12:40:47.090962 containerd[1570]: 2025-05-15 12:40:47.043 [INFO][4355] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.83.128/26 handle="k8s-pod-network.5379b2ac23b2a1ea32c6cc4705e1c105234e0ab7b66afc66f1415997fe8905b9" host="172-236-125-189" May 15 12:40:47.090962 containerd[1570]: 2025-05-15 12:40:47.049 [INFO][4355] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.83.132/26] block=192.168.83.128/26 handle="k8s-pod-network.5379b2ac23b2a1ea32c6cc4705e1c105234e0ab7b66afc66f1415997fe8905b9" host="172-236-125-189" May 15 12:40:47.090962 
containerd[1570]: 2025-05-15 12:40:47.049 [INFO][4355] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.83.132/26] handle="k8s-pod-network.5379b2ac23b2a1ea32c6cc4705e1c105234e0ab7b66afc66f1415997fe8905b9" host="172-236-125-189" May 15 12:40:47.090962 containerd[1570]: 2025-05-15 12:40:47.049 [INFO][4355] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 15 12:40:47.090962 containerd[1570]: 2025-05-15 12:40:47.049 [INFO][4355] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.83.132/26] IPv6=[] ContainerID="5379b2ac23b2a1ea32c6cc4705e1c105234e0ab7b66afc66f1415997fe8905b9" HandleID="k8s-pod-network.5379b2ac23b2a1ea32c6cc4705e1c105234e0ab7b66afc66f1415997fe8905b9" Workload="172--236--125--189-k8s-calico--kube--controllers--b4bb544b7--zbnfw-eth0" May 15 12:40:47.091697 containerd[1570]: 2025-05-15 12:40:47.054 [INFO][4325] cni-plugin/k8s.go 386: Populated endpoint ContainerID="5379b2ac23b2a1ea32c6cc4705e1c105234e0ab7b66afc66f1415997fe8905b9" Namespace="calico-system" Pod="calico-kube-controllers-b4bb544b7-zbnfw" WorkloadEndpoint="172--236--125--189-k8s-calico--kube--controllers--b4bb544b7--zbnfw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--236--125--189-k8s-calico--kube--controllers--b4bb544b7--zbnfw-eth0", GenerateName:"calico-kube-controllers-b4bb544b7-", Namespace:"calico-system", SelfLink:"", UID:"b4d5a2a6-1051-40f7-84e4-ce0a66d4b74d", ResourceVersion:"816", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 12, 40, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"b4bb544b7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-236-125-189", ContainerID:"", Pod:"calico-kube-controllers-b4bb544b7-zbnfw", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.83.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliee8ac216aca", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 12:40:47.091697 containerd[1570]: 2025-05-15 12:40:47.055 [INFO][4325] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.83.132/32] ContainerID="5379b2ac23b2a1ea32c6cc4705e1c105234e0ab7b66afc66f1415997fe8905b9" Namespace="calico-system" Pod="calico-kube-controllers-b4bb544b7-zbnfw" WorkloadEndpoint="172--236--125--189-k8s-calico--kube--controllers--b4bb544b7--zbnfw-eth0" May 15 12:40:47.091697 containerd[1570]: 2025-05-15 12:40:47.055 [INFO][4325] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliee8ac216aca ContainerID="5379b2ac23b2a1ea32c6cc4705e1c105234e0ab7b66afc66f1415997fe8905b9" Namespace="calico-system" Pod="calico-kube-controllers-b4bb544b7-zbnfw" WorkloadEndpoint="172--236--125--189-k8s-calico--kube--controllers--b4bb544b7--zbnfw-eth0" May 15 12:40:47.091697 containerd[1570]: 2025-05-15 12:40:47.063 [INFO][4325] cni-plugin/dataplane_linux.go 508: 
Disabling IPv4 forwarding ContainerID="5379b2ac23b2a1ea32c6cc4705e1c105234e0ab7b66afc66f1415997fe8905b9" Namespace="calico-system" Pod="calico-kube-controllers-b4bb544b7-zbnfw" WorkloadEndpoint="172--236--125--189-k8s-calico--kube--controllers--b4bb544b7--zbnfw-eth0" May 15 12:40:47.091697 containerd[1570]: 2025-05-15 12:40:47.064 [INFO][4325] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="5379b2ac23b2a1ea32c6cc4705e1c105234e0ab7b66afc66f1415997fe8905b9" Namespace="calico-system" Pod="calico-kube-controllers-b4bb544b7-zbnfw" WorkloadEndpoint="172--236--125--189-k8s-calico--kube--controllers--b4bb544b7--zbnfw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--236--125--189-k8s-calico--kube--controllers--b4bb544b7--zbnfw-eth0", GenerateName:"calico-kube-controllers-b4bb544b7-", Namespace:"calico-system", SelfLink:"", UID:"b4d5a2a6-1051-40f7-84e4-ce0a66d4b74d", ResourceVersion:"816", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 12, 40, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"b4bb544b7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-236-125-189", ContainerID:"5379b2ac23b2a1ea32c6cc4705e1c105234e0ab7b66afc66f1415997fe8905b9", Pod:"calico-kube-controllers-b4bb544b7-zbnfw", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.83.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliee8ac216aca", MAC:"76:0d:65:8c:3c:04", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 12:40:47.091697 containerd[1570]: 2025-05-15 12:40:47.082 [INFO][4325] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="5379b2ac23b2a1ea32c6cc4705e1c105234e0ab7b66afc66f1415997fe8905b9" Namespace="calico-system" Pod="calico-kube-controllers-b4bb544b7-zbnfw" WorkloadEndpoint="172--236--125--189-k8s-calico--kube--controllers--b4bb544b7--zbnfw-eth0" May 15 12:40:47.133013 systemd-networkd[1466]: cali1e217452c7d: Link UP May 15 12:40:47.135199 systemd-networkd[1466]: cali1e217452c7d: Gained carrier May 15 12:40:47.162462 containerd[1570]: time="2025-05-15T12:40:47.162119979Z" level=info msg="connecting to shim 5379b2ac23b2a1ea32c6cc4705e1c105234e0ab7b66afc66f1415997fe8905b9" address="unix:///run/containerd/s/857cd5d336b7c622432bd7407f41ccd98a55e0eca72e9e1e9d727ea585eddded" namespace=k8s.io protocol=ttrpc version=3 May 15 12:40:47.173923 containerd[1570]: 2025-05-15 12:40:46.957 [INFO][4318] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--236--125--189-k8s-calico--apiserver--78b5784dc8--mxm9v-eth0 calico-apiserver-78b5784dc8- calico-apiserver f51c33cf-e651-4159-ba52-866ced1779f7 817 0 2025-05-15 12:40:07 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver 
pod-template-hash:78b5784dc8 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s 172-236-125-189 calico-apiserver-78b5784dc8-mxm9v eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali1e217452c7d [] []}} ContainerID="d2d4e7036a63651a965aa6accf5e121eeee0d7127fedf7cc79343c9cd5b1752d" Namespace="calico-apiserver" Pod="calico-apiserver-78b5784dc8-mxm9v" WorkloadEndpoint="172--236--125--189-k8s-calico--apiserver--78b5784dc8--mxm9v-" May 15 12:40:47.173923 containerd[1570]: 2025-05-15 12:40:46.957 [INFO][4318] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="d2d4e7036a63651a965aa6accf5e121eeee0d7127fedf7cc79343c9cd5b1752d" Namespace="calico-apiserver" Pod="calico-apiserver-78b5784dc8-mxm9v" WorkloadEndpoint="172--236--125--189-k8s-calico--apiserver--78b5784dc8--mxm9v-eth0" May 15 12:40:47.173923 containerd[1570]: 2025-05-15 12:40:47.012 [INFO][4357] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d2d4e7036a63651a965aa6accf5e121eeee0d7127fedf7cc79343c9cd5b1752d" HandleID="k8s-pod-network.d2d4e7036a63651a965aa6accf5e121eeee0d7127fedf7cc79343c9cd5b1752d" Workload="172--236--125--189-k8s-calico--apiserver--78b5784dc8--mxm9v-eth0" May 15 12:40:47.173923 containerd[1570]: 2025-05-15 12:40:47.024 [INFO][4357] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d2d4e7036a63651a965aa6accf5e121eeee0d7127fedf7cc79343c9cd5b1752d" HandleID="k8s-pod-network.d2d4e7036a63651a965aa6accf5e121eeee0d7127fedf7cc79343c9cd5b1752d" Workload="172--236--125--189-k8s-calico--apiserver--78b5784dc8--mxm9v-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000292480), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"172-236-125-189", "pod":"calico-apiserver-78b5784dc8-mxm9v", "timestamp":"2025-05-15 12:40:47.010363534 +0000 UTC"}, Hostname:"172-236-125-189", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 15 12:40:47.173923 containerd[1570]: 2025-05-15 12:40:47.025 [INFO][4357] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 12:40:47.173923 containerd[1570]: 2025-05-15 12:40:47.054 [INFO][4357] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 15 12:40:47.173923 containerd[1570]: 2025-05-15 12:40:47.054 [INFO][4357] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-236-125-189' May 15 12:40:47.173923 containerd[1570]: 2025-05-15 12:40:47.058 [INFO][4357] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.d2d4e7036a63651a965aa6accf5e121eeee0d7127fedf7cc79343c9cd5b1752d" host="172-236-125-189" May 15 12:40:47.173923 containerd[1570]: 2025-05-15 12:40:47.065 [INFO][4357] ipam/ipam.go 372: Looking up existing affinities for host host="172-236-125-189" May 15 12:40:47.173923 containerd[1570]: 2025-05-15 12:40:47.074 [INFO][4357] ipam/ipam.go 489: Trying affinity for 192.168.83.128/26 host="172-236-125-189" May 15 12:40:47.173923 containerd[1570]: 2025-05-15 12:40:47.084 [INFO][4357] ipam/ipam.go 155: Attempting to load block cidr=192.168.83.128/26 host="172-236-125-189" May 15 12:40:47.173923 containerd[1570]: 2025-05-15 12:40:47.093 [INFO][4357] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.83.128/26 host="172-236-125-189" May 15 12:40:47.173923 containerd[1570]: 2025-05-15 12:40:47.094 [INFO][4357] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.83.128/26 handle="k8s-pod-network.d2d4e7036a63651a965aa6accf5e121eeee0d7127fedf7cc79343c9cd5b1752d" host="172-236-125-189" May 15 12:40:47.173923 containerd[1570]: 2025-05-15 12:40:47.097 [INFO][4357] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.d2d4e7036a63651a965aa6accf5e121eeee0d7127fedf7cc79343c9cd5b1752d May 15 12:40:47.173923 containerd[1570]: 2025-05-15 12:40:47.102 [INFO][4357] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.83.128/26 handle="k8s-pod-network.d2d4e7036a63651a965aa6accf5e121eeee0d7127fedf7cc79343c9cd5b1752d" host="172-236-125-189" May 15 12:40:47.173923 containerd[1570]: 2025-05-15 12:40:47.114 [INFO][4357] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.83.133/26] block=192.168.83.128/26 handle="k8s-pod-network.d2d4e7036a63651a965aa6accf5e121eeee0d7127fedf7cc79343c9cd5b1752d" host="172-236-125-189" May 15 12:40:47.173923 containerd[1570]: 2025-05-15 12:40:47.115 [INFO][4357] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.83.133/26] handle="k8s-pod-network.d2d4e7036a63651a965aa6accf5e121eeee0d7127fedf7cc79343c9cd5b1752d" host="172-236-125-189" May 15 12:40:47.173923 containerd[1570]: 2025-05-15 12:40:47.116 [INFO][4357] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
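[Annotation] By this point every workload on the node has drawn sequentially from the same affine block. The block math:

    /26  =>  2^(32-26) = 64 addresses, 192.168.83.128 through 192.168.83.191

with assignments .129 (coredns-zltl2), .130 (calico-apiserver-794557d677-skbcb), .131 (calico-apiserver-78b5784dc8-8lbpp), .132 (calico-kube-controllers-b4bb544b7-zbnfw), and .133 (calico-apiserver-78b5784dc8-mxm9v), consistent with the first-free-address behavior sketched earlier.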
May 15 12:40:47.173923 containerd[1570]: 2025-05-15 12:40:47.116 [INFO][4357] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.83.133/26] IPv6=[] ContainerID="d2d4e7036a63651a965aa6accf5e121eeee0d7127fedf7cc79343c9cd5b1752d" HandleID="k8s-pod-network.d2d4e7036a63651a965aa6accf5e121eeee0d7127fedf7cc79343c9cd5b1752d" Workload="172--236--125--189-k8s-calico--apiserver--78b5784dc8--mxm9v-eth0"
May 15 12:40:47.174625 containerd[1570]: 2025-05-15 12:40:47.120 [INFO][4318] cni-plugin/k8s.go 386: Populated endpoint ContainerID="d2d4e7036a63651a965aa6accf5e121eeee0d7127fedf7cc79343c9cd5b1752d" Namespace="calico-apiserver" Pod="calico-apiserver-78b5784dc8-mxm9v" WorkloadEndpoint="172--236--125--189-k8s-calico--apiserver--78b5784dc8--mxm9v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--236--125--189-k8s-calico--apiserver--78b5784dc8--mxm9v-eth0", GenerateName:"calico-apiserver-78b5784dc8-", Namespace:"calico-apiserver", SelfLink:"", UID:"f51c33cf-e651-4159-ba52-866ced1779f7", ResourceVersion:"817", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 12, 40, 7, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"78b5784dc8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-236-125-189", ContainerID:"", Pod:"calico-apiserver-78b5784dc8-mxm9v", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.83.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1e217452c7d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
May 15 12:40:47.174625 containerd[1570]: 2025-05-15 12:40:47.120 [INFO][4318] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.83.133/32] ContainerID="d2d4e7036a63651a965aa6accf5e121eeee0d7127fedf7cc79343c9cd5b1752d" Namespace="calico-apiserver" Pod="calico-apiserver-78b5784dc8-mxm9v" WorkloadEndpoint="172--236--125--189-k8s-calico--apiserver--78b5784dc8--mxm9v-eth0"
May 15 12:40:47.174625 containerd[1570]: 2025-05-15 12:40:47.121 [INFO][4318] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1e217452c7d ContainerID="d2d4e7036a63651a965aa6accf5e121eeee0d7127fedf7cc79343c9cd5b1752d" Namespace="calico-apiserver" Pod="calico-apiserver-78b5784dc8-mxm9v" WorkloadEndpoint="172--236--125--189-k8s-calico--apiserver--78b5784dc8--mxm9v-eth0"
May 15 12:40:47.174625 containerd[1570]: 2025-05-15 12:40:47.137 [INFO][4318] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d2d4e7036a63651a965aa6accf5e121eeee0d7127fedf7cc79343c9cd5b1752d" Namespace="calico-apiserver" Pod="calico-apiserver-78b5784dc8-mxm9v" WorkloadEndpoint="172--236--125--189-k8s-calico--apiserver--78b5784dc8--mxm9v-eth0"
May 15 12:40:47.174625 containerd[1570]: 2025-05-15 12:40:47.138 [INFO][4318] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="d2d4e7036a63651a965aa6accf5e121eeee0d7127fedf7cc79343c9cd5b1752d" Namespace="calico-apiserver" Pod="calico-apiserver-78b5784dc8-mxm9v" WorkloadEndpoint="172--236--125--189-k8s-calico--apiserver--78b5784dc8--mxm9v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--236--125--189-k8s-calico--apiserver--78b5784dc8--mxm9v-eth0", GenerateName:"calico-apiserver-78b5784dc8-", Namespace:"calico-apiserver", SelfLink:"", UID:"f51c33cf-e651-4159-ba52-866ced1779f7", ResourceVersion:"817", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 12, 40, 7, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"78b5784dc8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-236-125-189", ContainerID:"d2d4e7036a63651a965aa6accf5e121eeee0d7127fedf7cc79343c9cd5b1752d", Pod:"calico-apiserver-78b5784dc8-mxm9v", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.83.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1e217452c7d", MAC:"aa:62:a4:a9:28:18", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
May 15 12:40:47.174625 containerd[1570]: 2025-05-15 12:40:47.163 [INFO][4318] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="d2d4e7036a63651a965aa6accf5e121eeee0d7127fedf7cc79343c9cd5b1752d" Namespace="calico-apiserver" Pod="calico-apiserver-78b5784dc8-mxm9v" WorkloadEndpoint="172--236--125--189-k8s-calico--apiserver--78b5784dc8--mxm9v-eth0"
May 15 12:40:47.235340 systemd[1]: Started cri-containerd-5379b2ac23b2a1ea32c6cc4705e1c105234e0ab7b66afc66f1415997fe8905b9.scope - libcontainer container 5379b2ac23b2a1ea32c6cc4705e1c105234e0ab7b66afc66f1415997fe8905b9.
May 15 12:40:47.244314 systemd-networkd[1466]: caliacc516dd984: Link UP
May 15 12:40:47.246499 systemd-networkd[1466]: caliacc516dd984: Gained carrier
May 15 12:40:47.253402 containerd[1570]: time="2025-05-15T12:40:47.253346160Z" level=info msg="connecting to shim d2d4e7036a63651a965aa6accf5e121eeee0d7127fedf7cc79343c9cd5b1752d" address="unix:///run/containerd/s/9e5f6758bbfe6e190b9dbed4f1f3114ac4a87d4b2ed9bcbfdb6f86e212d0d306" namespace=k8s.io protocol=ttrpc version=3
May 15 12:40:47.272678 containerd[1570]: 2025-05-15 12:40:46.970 [INFO][4333] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--236--125--189-k8s-coredns--7db6d8ff4d--vnsrk-eth0 coredns-7db6d8ff4d- kube-system d81f736f-2cfe-4dd7-8bae-39e5d7b0171c 813 0 2025-05-15 12:40:00 +0000 UTC <nil> <nil> map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s 172-236-125-189 coredns-7db6d8ff4d-vnsrk eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] caliacc516dd984 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="aa03b344e7123a924e6412aff41b2aa226f3c9342864bf1ad4165677c1964ad1" Namespace="kube-system" Pod="coredns-7db6d8ff4d-vnsrk" WorkloadEndpoint="172--236--125--189-k8s-coredns--7db6d8ff4d--vnsrk-"
May 15 12:40:47.272678 containerd[1570]: 2025-05-15 12:40:46.970 [INFO][4333] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="aa03b344e7123a924e6412aff41b2aa226f3c9342864bf1ad4165677c1964ad1" Namespace="kube-system" Pod="coredns-7db6d8ff4d-vnsrk" WorkloadEndpoint="172--236--125--189-k8s-coredns--7db6d8ff4d--vnsrk-eth0"
May 15 12:40:47.272678 containerd[1570]: 2025-05-15 12:40:47.017 [INFO][4365] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="aa03b344e7123a924e6412aff41b2aa226f3c9342864bf1ad4165677c1964ad1" HandleID="k8s-pod-network.aa03b344e7123a924e6412aff41b2aa226f3c9342864bf1ad4165677c1964ad1" Workload="172--236--125--189-k8s-coredns--7db6d8ff4d--vnsrk-eth0"
May 15 12:40:47.272678 containerd[1570]: 2025-05-15 12:40:47.030 [INFO][4365] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="aa03b344e7123a924e6412aff41b2aa226f3c9342864bf1ad4165677c1964ad1" HandleID="k8s-pod-network.aa03b344e7123a924e6412aff41b2aa226f3c9342864bf1ad4165677c1964ad1" Workload="172--236--125--189-k8s-coredns--7db6d8ff4d--vnsrk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000335450), Attrs:map[string]string{"namespace":"kube-system", "node":"172-236-125-189", "pod":"coredns-7db6d8ff4d-vnsrk", "timestamp":"2025-05-15 12:40:47.015240267 +0000 UTC"}, Hostname:"172-236-125-189", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
May 15 12:40:47.272678 containerd[1570]: 2025-05-15 12:40:47.030 [INFO][4365] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
May 15 12:40:47.272678 containerd[1570]: 2025-05-15 12:40:47.116 [INFO][4365] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
May 15 12:40:47.272678 containerd[1570]: 2025-05-15 12:40:47.117 [INFO][4365] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-236-125-189'
May 15 12:40:47.272678 containerd[1570]: 2025-05-15 12:40:47.130 [INFO][4365] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.aa03b344e7123a924e6412aff41b2aa226f3c9342864bf1ad4165677c1964ad1" host="172-236-125-189"
May 15 12:40:47.272678 containerd[1570]: 2025-05-15 12:40:47.141 [INFO][4365] ipam/ipam.go 372: Looking up existing affinities for host host="172-236-125-189"
May 15 12:40:47.272678 containerd[1570]: 2025-05-15 12:40:47.164 [INFO][4365] ipam/ipam.go 489: Trying affinity for 192.168.83.128/26 host="172-236-125-189"
May 15 12:40:47.272678 containerd[1570]: 2025-05-15 12:40:47.168 [INFO][4365] ipam/ipam.go 155: Attempting to load block cidr=192.168.83.128/26 host="172-236-125-189"
May 15 12:40:47.272678 containerd[1570]: 2025-05-15 12:40:47.181 [INFO][4365] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.83.128/26 host="172-236-125-189"
May 15 12:40:47.272678 containerd[1570]: 2025-05-15 12:40:47.183 [INFO][4365] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.83.128/26 handle="k8s-pod-network.aa03b344e7123a924e6412aff41b2aa226f3c9342864bf1ad4165677c1964ad1" host="172-236-125-189"
May 15 12:40:47.272678 containerd[1570]: 2025-05-15 12:40:47.187 [INFO][4365] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.aa03b344e7123a924e6412aff41b2aa226f3c9342864bf1ad4165677c1964ad1
May 15 12:40:47.272678 containerd[1570]: 2025-05-15 12:40:47.202 [INFO][4365] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.83.128/26 handle="k8s-pod-network.aa03b344e7123a924e6412aff41b2aa226f3c9342864bf1ad4165677c1964ad1" host="172-236-125-189"
May 15 12:40:47.272678 containerd[1570]: 2025-05-15 12:40:47.226 [INFO][4365] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.83.134/26] block=192.168.83.128/26 handle="k8s-pod-network.aa03b344e7123a924e6412aff41b2aa226f3c9342864bf1ad4165677c1964ad1" host="172-236-125-189"
May 15 12:40:47.272678 containerd[1570]: 2025-05-15 12:40:47.226 [INFO][4365] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.83.134/26] handle="k8s-pod-network.aa03b344e7123a924e6412aff41b2aa226f3c9342864bf1ad4165677c1964ad1" host="172-236-125-189"
May 15 12:40:47.272678 containerd[1570]: 2025-05-15 12:40:47.226 [INFO][4365] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
May 15 12:40:47.272678 containerd[1570]: 2025-05-15 12:40:47.226 [INFO][4365] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.83.134/26] IPv6=[] ContainerID="aa03b344e7123a924e6412aff41b2aa226f3c9342864bf1ad4165677c1964ad1" HandleID="k8s-pod-network.aa03b344e7123a924e6412aff41b2aa226f3c9342864bf1ad4165677c1964ad1" Workload="172--236--125--189-k8s-coredns--7db6d8ff4d--vnsrk-eth0"
May 15 12:40:47.274073 containerd[1570]: 2025-05-15 12:40:47.234 [INFO][4333] cni-plugin/k8s.go 386: Populated endpoint ContainerID="aa03b344e7123a924e6412aff41b2aa226f3c9342864bf1ad4165677c1964ad1" Namespace="kube-system" Pod="coredns-7db6d8ff4d-vnsrk" WorkloadEndpoint="172--236--125--189-k8s-coredns--7db6d8ff4d--vnsrk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--236--125--189-k8s-coredns--7db6d8ff4d--vnsrk-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"d81f736f-2cfe-4dd7-8bae-39e5d7b0171c", ResourceVersion:"813", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 12, 40, 0, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-236-125-189", ContainerID:"", Pod:"coredns-7db6d8ff4d-vnsrk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.83.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliacc516dd984", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
May 15 12:40:47.274073 containerd[1570]: 2025-05-15 12:40:47.235 [INFO][4333] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.83.134/32] ContainerID="aa03b344e7123a924e6412aff41b2aa226f3c9342864bf1ad4165677c1964ad1" Namespace="kube-system" Pod="coredns-7db6d8ff4d-vnsrk" WorkloadEndpoint="172--236--125--189-k8s-coredns--7db6d8ff4d--vnsrk-eth0"
May 15 12:40:47.274073 containerd[1570]: 2025-05-15 12:40:47.235 [INFO][4333] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliacc516dd984 ContainerID="aa03b344e7123a924e6412aff41b2aa226f3c9342864bf1ad4165677c1964ad1" Namespace="kube-system" Pod="coredns-7db6d8ff4d-vnsrk" WorkloadEndpoint="172--236--125--189-k8s-coredns--7db6d8ff4d--vnsrk-eth0"
May 15 12:40:47.274073 containerd[1570]: 2025-05-15 12:40:47.246 [INFO][4333] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="aa03b344e7123a924e6412aff41b2aa226f3c9342864bf1ad4165677c1964ad1" Namespace="kube-system" Pod="coredns-7db6d8ff4d-vnsrk" WorkloadEndpoint="172--236--125--189-k8s-coredns--7db6d8ff4d--vnsrk-eth0"
May 15 12:40:47.274073 containerd[1570]: 2025-05-15 12:40:47.246 [INFO][4333] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="aa03b344e7123a924e6412aff41b2aa226f3c9342864bf1ad4165677c1964ad1" Namespace="kube-system" Pod="coredns-7db6d8ff4d-vnsrk" WorkloadEndpoint="172--236--125--189-k8s-coredns--7db6d8ff4d--vnsrk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--236--125--189-k8s-coredns--7db6d8ff4d--vnsrk-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"d81f736f-2cfe-4dd7-8bae-39e5d7b0171c", ResourceVersion:"813", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 12, 40, 0, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-236-125-189", ContainerID:"aa03b344e7123a924e6412aff41b2aa226f3c9342864bf1ad4165677c1964ad1", Pod:"coredns-7db6d8ff4d-vnsrk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.83.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliacc516dd984", MAC:"12:fb:51:b1:be:46", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
May 15 12:40:47.274073 containerd[1570]: 2025-05-15 12:40:47.264 [INFO][4333] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="aa03b344e7123a924e6412aff41b2aa226f3c9342864bf1ad4165677c1964ad1" Namespace="kube-system" Pod="coredns-7db6d8ff4d-vnsrk" WorkloadEndpoint="172--236--125--189-k8s-coredns--7db6d8ff4d--vnsrk-eth0"
May 15 12:40:47.302145 systemd[1]: Started cri-containerd-d2d4e7036a63651a965aa6accf5e121eeee0d7127fedf7cc79343c9cd5b1752d.scope - libcontainer container d2d4e7036a63651a965aa6accf5e121eeee0d7127fedf7cc79343c9cd5b1752d.
May 15 12:40:47.322510 containerd[1570]: time="2025-05-15T12:40:47.322461811Z" level=info msg="connecting to shim aa03b344e7123a924e6412aff41b2aa226f3c9342864bf1ad4165677c1964ad1" address="unix:///run/containerd/s/30d0a8994c1c1e1b1ec9ac0f3d9baa32dcbdb61f5ddc08336507238a8ec4647b" namespace=k8s.io protocol=ttrpc version=3
May 15 12:40:47.368990 containerd[1570]: time="2025-05-15T12:40:47.367885220Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-b4bb544b7-zbnfw,Uid:b4d5a2a6-1051-40f7-84e4-ce0a66d4b74d,Namespace:calico-system,Attempt:0,} returns sandbox id \"5379b2ac23b2a1ea32c6cc4705e1c105234e0ab7b66afc66f1415997fe8905b9\""
May 15 12:40:47.373605 systemd[1]: Started cri-containerd-aa03b344e7123a924e6412aff41b2aa226f3c9342864bf1ad4165677c1964ad1.scope - libcontainer container aa03b344e7123a924e6412aff41b2aa226f3c9342864bf1ad4165677c1964ad1.
May 15 12:40:47.400236 containerd[1570]: time="2025-05-15T12:40:47.400184496Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-78b5784dc8-mxm9v,Uid:f51c33cf-e651-4159-ba52-866ced1779f7,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"d2d4e7036a63651a965aa6accf5e121eeee0d7127fedf7cc79343c9cd5b1752d\""
May 15 12:40:47.438251 containerd[1570]: time="2025-05-15T12:40:47.438129634Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-vnsrk,Uid:d81f736f-2cfe-4dd7-8bae-39e5d7b0171c,Namespace:kube-system,Attempt:0,} returns sandbox id \"aa03b344e7123a924e6412aff41b2aa226f3c9342864bf1ad4165677c1964ad1\""
May 15 12:40:47.440250 kubelet[2830]: E0515 12:40:47.440224 2830 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16"
May 15 12:40:47.442610 containerd[1570]: time="2025-05-15T12:40:47.442540384Z" level=info msg="CreateContainer within sandbox \"aa03b344e7123a924e6412aff41b2aa226f3c9342864bf1ad4165677c1964ad1\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
May 15 12:40:47.452961 containerd[1570]: time="2025-05-15T12:40:47.452914104Z" level=info msg="Container b1c2b51ea3ff83be844afe58fa98b10183f5f4d9c597ede8fba366da8fc6e35f: CDI devices from CRI Config.CDIDevices: []"
May 15 12:40:47.457451 containerd[1570]: time="2025-05-15T12:40:47.457375606Z" level=info msg="CreateContainer within sandbox \"aa03b344e7123a924e6412aff41b2aa226f3c9342864bf1ad4165677c1964ad1\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b1c2b51ea3ff83be844afe58fa98b10183f5f4d9c597ede8fba366da8fc6e35f\""
May 15 12:40:47.458111 containerd[1570]: time="2025-05-15T12:40:47.458072966Z" level=info msg="StartContainer for \"b1c2b51ea3ff83be844afe58fa98b10183f5f4d9c597ede8fba366da8fc6e35f\""
May 15 12:40:47.459211 containerd[1570]: time="2025-05-15T12:40:47.459178752Z" level=info msg="connecting to shim b1c2b51ea3ff83be844afe58fa98b10183f5f4d9c597ede8fba366da8fc6e35f" address="unix:///run/containerd/s/30d0a8994c1c1e1b1ec9ac0f3d9baa32dcbdb61f5ddc08336507238a8ec4647b" protocol=ttrpc version=3
May 15 12:40:47.481109 systemd[1]: Started cri-containerd-b1c2b51ea3ff83be844afe58fa98b10183f5f4d9c597ede8fba366da8fc6e35f.scope - libcontainer container b1c2b51ea3ff83be844afe58fa98b10183f5f4d9c597ede8fba366da8fc6e35f.
May 15 12:40:47.512861 containerd[1570]: time="2025-05-15T12:40:47.512782419Z" level=info msg="StartContainer for \"b1c2b51ea3ff83be844afe58fa98b10183f5f4d9c597ede8fba366da8fc6e35f\" returns successfully"
May 15 12:40:47.886570 containerd[1570]: time="2025-05-15T12:40:47.886500535Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-n6z76,Uid:2f1afa6e-6224-473c-8d91-9f8e0eedd57e,Namespace:calico-system,Attempt:0,}"
May 15 12:40:48.001371 systemd-networkd[1466]: calib8cc11d55e4: Link UP
May 15 12:40:48.002075 systemd-networkd[1466]: calib8cc11d55e4: Gained carrier
May 15 12:40:48.017396 containerd[1570]: 2025-05-15 12:40:47.929 [INFO][4583] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--236--125--189-k8s-csi--node--driver--n6z76-eth0 csi-node-driver- calico-system 2f1afa6e-6224-473c-8d91-9f8e0eedd57e 664 0 2025-05-15 12:40:07 +0000 UTC <nil> <nil> map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:55b7b4b9d k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 172-236-125-189 csi-node-driver-n6z76 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calib8cc11d55e4 [] []}} ContainerID="f37316bf8af73fc9883e305e2dfc76db84fd711c65aab55aed37a15e1043fcee" Namespace="calico-system" Pod="csi-node-driver-n6z76" WorkloadEndpoint="172--236--125--189-k8s-csi--node--driver--n6z76-"
May 15 12:40:48.017396 containerd[1570]: 2025-05-15 12:40:47.929 [INFO][4583] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="f37316bf8af73fc9883e305e2dfc76db84fd711c65aab55aed37a15e1043fcee" Namespace="calico-system" Pod="csi-node-driver-n6z76" WorkloadEndpoint="172--236--125--189-k8s-csi--node--driver--n6z76-eth0"
May 15 12:40:48.017396 containerd[1570]: 2025-05-15 12:40:47.956 [INFO][4595] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f37316bf8af73fc9883e305e2dfc76db84fd711c65aab55aed37a15e1043fcee" HandleID="k8s-pod-network.f37316bf8af73fc9883e305e2dfc76db84fd711c65aab55aed37a15e1043fcee" Workload="172--236--125--189-k8s-csi--node--driver--n6z76-eth0"
May 15 12:40:48.017396 containerd[1570]: 2025-05-15 12:40:47.964 [INFO][4595] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f37316bf8af73fc9883e305e2dfc76db84fd711c65aab55aed37a15e1043fcee" HandleID="k8s-pod-network.f37316bf8af73fc9883e305e2dfc76db84fd711c65aab55aed37a15e1043fcee" Workload="172--236--125--189-k8s-csi--node--driver--n6z76-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003bd1f0), Attrs:map[string]string{"namespace":"calico-system", "node":"172-236-125-189", "pod":"csi-node-driver-n6z76", "timestamp":"2025-05-15 12:40:47.956667334 +0000 UTC"}, Hostname:"172-236-125-189", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
May 15 12:40:48.017396 containerd[1570]: 2025-05-15 12:40:47.965 [INFO][4595] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
May 15 12:40:48.017396 containerd[1570]: 2025-05-15 12:40:47.965 [INFO][4595] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
May 15 12:40:48.017396 containerd[1570]: 2025-05-15 12:40:47.965 [INFO][4595] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-236-125-189'
May 15 12:40:48.017396 containerd[1570]: 2025-05-15 12:40:47.966 [INFO][4595] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.f37316bf8af73fc9883e305e2dfc76db84fd711c65aab55aed37a15e1043fcee" host="172-236-125-189"
May 15 12:40:48.017396 containerd[1570]: 2025-05-15 12:40:47.970 [INFO][4595] ipam/ipam.go 372: Looking up existing affinities for host host="172-236-125-189"
May 15 12:40:48.017396 containerd[1570]: 2025-05-15 12:40:47.974 [INFO][4595] ipam/ipam.go 489: Trying affinity for 192.168.83.128/26 host="172-236-125-189"
May 15 12:40:48.017396 containerd[1570]: 2025-05-15 12:40:47.975 [INFO][4595] ipam/ipam.go 155: Attempting to load block cidr=192.168.83.128/26 host="172-236-125-189"
May 15 12:40:48.017396 containerd[1570]: 2025-05-15 12:40:47.977 [INFO][4595] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.83.128/26 host="172-236-125-189"
May 15 12:40:48.017396 containerd[1570]: 2025-05-15 12:40:47.977 [INFO][4595] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.83.128/26 handle="k8s-pod-network.f37316bf8af73fc9883e305e2dfc76db84fd711c65aab55aed37a15e1043fcee" host="172-236-125-189"
May 15 12:40:48.017396 containerd[1570]: 2025-05-15 12:40:47.979 [INFO][4595] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.f37316bf8af73fc9883e305e2dfc76db84fd711c65aab55aed37a15e1043fcee
May 15 12:40:48.017396 containerd[1570]: 2025-05-15 12:40:47.983 [INFO][4595] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.83.128/26 handle="k8s-pod-network.f37316bf8af73fc9883e305e2dfc76db84fd711c65aab55aed37a15e1043fcee" host="172-236-125-189"
May 15 12:40:48.017396 containerd[1570]: 2025-05-15 12:40:47.989 [INFO][4595] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.83.135/26] block=192.168.83.128/26 handle="k8s-pod-network.f37316bf8af73fc9883e305e2dfc76db84fd711c65aab55aed37a15e1043fcee" host="172-236-125-189"
May 15 12:40:48.017396 containerd[1570]: 2025-05-15 12:40:47.989 [INFO][4595] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.83.135/26] handle="k8s-pod-network.f37316bf8af73fc9883e305e2dfc76db84fd711c65aab55aed37a15e1043fcee" host="172-236-125-189"
May 15 12:40:48.017396 containerd[1570]: 2025-05-15 12:40:47.989 [INFO][4595] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
May 15 12:40:48.017396 containerd[1570]: 2025-05-15 12:40:47.989 [INFO][4595] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.83.135/26] IPv6=[] ContainerID="f37316bf8af73fc9883e305e2dfc76db84fd711c65aab55aed37a15e1043fcee" HandleID="k8s-pod-network.f37316bf8af73fc9883e305e2dfc76db84fd711c65aab55aed37a15e1043fcee" Workload="172--236--125--189-k8s-csi--node--driver--n6z76-eth0"
May 15 12:40:48.018675 containerd[1570]: 2025-05-15 12:40:47.992 [INFO][4583] cni-plugin/k8s.go 386: Populated endpoint ContainerID="f37316bf8af73fc9883e305e2dfc76db84fd711c65aab55aed37a15e1043fcee" Namespace="calico-system" Pod="csi-node-driver-n6z76" WorkloadEndpoint="172--236--125--189-k8s-csi--node--driver--n6z76-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--236--125--189-k8s-csi--node--driver--n6z76-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"2f1afa6e-6224-473c-8d91-9f8e0eedd57e", ResourceVersion:"664", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 12, 40, 7, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-236-125-189", ContainerID:"", Pod:"csi-node-driver-n6z76", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.83.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calib8cc11d55e4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
May 15 12:40:48.018675 containerd[1570]: 2025-05-15 12:40:47.993 [INFO][4583] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.83.135/32] ContainerID="f37316bf8af73fc9883e305e2dfc76db84fd711c65aab55aed37a15e1043fcee" Namespace="calico-system" Pod="csi-node-driver-n6z76" WorkloadEndpoint="172--236--125--189-k8s-csi--node--driver--n6z76-eth0"
May 15 12:40:48.018675 containerd[1570]: 2025-05-15 12:40:47.993 [INFO][4583] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib8cc11d55e4 ContainerID="f37316bf8af73fc9883e305e2dfc76db84fd711c65aab55aed37a15e1043fcee" Namespace="calico-system" Pod="csi-node-driver-n6z76" WorkloadEndpoint="172--236--125--189-k8s-csi--node--driver--n6z76-eth0"
May 15 12:40:48.018675 containerd[1570]: 2025-05-15 12:40:47.999 [INFO][4583] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f37316bf8af73fc9883e305e2dfc76db84fd711c65aab55aed37a15e1043fcee" Namespace="calico-system" Pod="csi-node-driver-n6z76" WorkloadEndpoint="172--236--125--189-k8s-csi--node--driver--n6z76-eth0"
May 15 12:40:48.018675 containerd[1570]: 2025-05-15 12:40:48.000 [INFO][4583] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="f37316bf8af73fc9883e305e2dfc76db84fd711c65aab55aed37a15e1043fcee" Namespace="calico-system" Pod="csi-node-driver-n6z76" WorkloadEndpoint="172--236--125--189-k8s-csi--node--driver--n6z76-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--236--125--189-k8s-csi--node--driver--n6z76-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"2f1afa6e-6224-473c-8d91-9f8e0eedd57e", ResourceVersion:"664", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 12, 40, 7, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-236-125-189", ContainerID:"f37316bf8af73fc9883e305e2dfc76db84fd711c65aab55aed37a15e1043fcee", Pod:"csi-node-driver-n6z76", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.83.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calib8cc11d55e4", MAC:"0a:2b:7a:a0:ef:d2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
May 15 12:40:48.018675 containerd[1570]: 2025-05-15 12:40:48.013 [INFO][4583] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="f37316bf8af73fc9883e305e2dfc76db84fd711c65aab55aed37a15e1043fcee" Namespace="calico-system" Pod="csi-node-driver-n6z76" WorkloadEndpoint="172--236--125--189-k8s-csi--node--driver--n6z76-eth0"
May 15 12:40:48.045940 containerd[1570]: time="2025-05-15T12:40:48.045856386Z" level=info msg="connecting to shim f37316bf8af73fc9883e305e2dfc76db84fd711c65aab55aed37a15e1043fcee" address="unix:///run/containerd/s/ccefcdcd635cc751dd2c51f5c138c057a071f41cd9a743c4e647acb0050a5a7c" namespace=k8s.io protocol=ttrpc version=3
May 15 12:40:48.055062 systemd-networkd[1466]: cali517ff4fc318: Gained IPv6LL
May 15 12:40:48.055361 systemd-networkd[1466]: calia21bf329e8b: Gained IPv6LL
May 15 12:40:48.080109 systemd[1]: Started cri-containerd-f37316bf8af73fc9883e305e2dfc76db84fd711c65aab55aed37a15e1043fcee.scope - libcontainer container f37316bf8af73fc9883e305e2dfc76db84fd711c65aab55aed37a15e1043fcee.
May 15 12:40:48.114209 containerd[1570]: time="2025-05-15T12:40:48.114171885Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-n6z76,Uid:2f1afa6e-6224-473c-8d91-9f8e0eedd57e,Namespace:calico-system,Attempt:0,} returns sandbox id \"f37316bf8af73fc9883e305e2dfc76db84fd711c65aab55aed37a15e1043fcee\""
May 15 12:40:48.182182 systemd-networkd[1466]: cali1e217452c7d: Gained IPv6LL
May 15 12:40:48.212544 kubelet[2830]: E0515 12:40:48.211673 2830 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16"
May 15 12:40:48.239492 kubelet[2830]: I0515 12:40:48.239426 2830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-vnsrk" podStartSLOduration=48.239298298 podStartE2EDuration="48.239298298s" podCreationTimestamp="2025-05-15 12:40:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 12:40:48.224221222 +0000 UTC m=+62.461460096" watchObservedRunningTime="2025-05-15 12:40:48.239298298 +0000 UTC m=+62.476537172"
May 15 12:40:48.505945 systemd-networkd[1466]: caliee8ac216aca: Gained IPv6LL
May 15 12:40:48.950398 systemd-networkd[1466]: caliacc516dd984: Gained IPv6LL
May 15 12:40:49.217116 kubelet[2830]: E0515 12:40:49.216736 2830 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16"
May 15 12:40:49.974181 systemd-networkd[1466]: calib8cc11d55e4: Gained IPv6LL
May 15 12:40:50.219024 kubelet[2830]: E0515 12:40:50.218954 2830 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16"
May 15 12:40:51.372128 containerd[1570]: time="2025-05-15T12:40:51.372067298Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 12:40:51.373169 containerd[1570]: time="2025-05-15T12:40:51.373008748Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=43021437"
May 15 12:40:51.373693 containerd[1570]: time="2025-05-15T12:40:51.373671157Z" level=info msg="ImageCreate event name:\"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 12:40:51.375066 containerd[1570]: time="2025-05-15T12:40:51.375039044Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 12:40:51.376048 containerd[1570]: time="2025-05-15T12:40:51.375699024Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"44514075\" in 4.902910822s"
May 15 12:40:51.376048 containerd[1570]: time="2025-05-15T12:40:51.375727880Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\""
May 15 12:40:51.377146 containerd[1570]: time="2025-05-15T12:40:51.377130080Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\""
May 15 12:40:51.380425 containerd[1570]: time="2025-05-15T12:40:51.380400347Z" level=info msg="CreateContainer within sandbox \"38a1229e305db91e48fc987ac62e1a0a74b70c72bb77b45e8aafea95bf32d6c7\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
May 15 12:40:51.388797 containerd[1570]: time="2025-05-15T12:40:51.388110248Z" level=info msg="Container 32d22fa7a610403637c6cf0e2cc1b63043f8a8bc70a15c3232871a31fe433038: CDI devices from CRI Config.CDIDevices: []"
May 15 12:40:51.400550 containerd[1570]: time="2025-05-15T12:40:51.400502091Z" level=info msg="CreateContainer within sandbox \"38a1229e305db91e48fc987ac62e1a0a74b70c72bb77b45e8aafea95bf32d6c7\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"32d22fa7a610403637c6cf0e2cc1b63043f8a8bc70a15c3232871a31fe433038\""
May 15 12:40:51.402125 containerd[1570]: time="2025-05-15T12:40:51.401290598Z" level=info msg="StartContainer for \"32d22fa7a610403637c6cf0e2cc1b63043f8a8bc70a15c3232871a31fe433038\""
May 15 12:40:51.402517 containerd[1570]: time="2025-05-15T12:40:51.402478515Z" level=info msg="connecting to shim 32d22fa7a610403637c6cf0e2cc1b63043f8a8bc70a15c3232871a31fe433038" address="unix:///run/containerd/s/0963b49b9ce2d1c36988085f8af5d7f3d5b61b38c939f267ddf6d6141a37852c" protocol=ttrpc version=3
May 15 12:40:51.451212 systemd[1]: Started cri-containerd-32d22fa7a610403637c6cf0e2cc1b63043f8a8bc70a15c3232871a31fe433038.scope - libcontainer container 32d22fa7a610403637c6cf0e2cc1b63043f8a8bc70a15c3232871a31fe433038.
May 15 12:40:51.509919 containerd[1570]: time="2025-05-15T12:40:51.509873387Z" level=info msg="StartContainer for \"32d22fa7a610403637c6cf0e2cc1b63043f8a8bc70a15c3232871a31fe433038\" returns successfully"
May 15 12:40:51.770377 containerd[1570]: time="2025-05-15T12:40:51.769156359Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 12:40:51.770701 containerd[1570]: time="2025-05-15T12:40:51.769933781Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=77"
May 15 12:40:51.771353 containerd[1570]: time="2025-05-15T12:40:51.771228944Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"44514075\" in 393.904542ms"
May 15 12:40:51.771353 containerd[1570]: time="2025-05-15T12:40:51.771258270Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\""
May 15 12:40:51.772956 containerd[1570]: time="2025-05-15T12:40:51.772931245Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\""
May 15 12:40:51.774217 containerd[1570]: time="2025-05-15T12:40:51.774148717Z" level=info msg="CreateContainer within sandbox \"fa02e25e87b15e2f48a92a373c9456feb99eee2b757d3822e3fb67ac0ecb034a\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
May 15 12:40:51.785883 containerd[1570]: time="2025-05-15T12:40:51.785055002Z" level=info msg="Container d91ab86bb90f18d9866c043f1dbd61b66b239c3beeab5fd2a54288ab2a972286: CDI devices from CRI Config.CDIDevices: []"
May 15 12:40:51.802712 containerd[1570]: time="2025-05-15T12:40:51.802665829Z" level=info msg="CreateContainer within sandbox \"fa02e25e87b15e2f48a92a373c9456feb99eee2b757d3822e3fb67ac0ecb034a\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"d91ab86bb90f18d9866c043f1dbd61b66b239c3beeab5fd2a54288ab2a972286\""
May 15 12:40:51.803388 containerd[1570]: time="2025-05-15T12:40:51.803364840Z" level=info msg="StartContainer for \"d91ab86bb90f18d9866c043f1dbd61b66b239c3beeab5fd2a54288ab2a972286\""
May 15 12:40:51.808244 containerd[1570]: time="2025-05-15T12:40:51.808202965Z" level=info msg="connecting to shim d91ab86bb90f18d9866c043f1dbd61b66b239c3beeab5fd2a54288ab2a972286" address="unix:///run/containerd/s/0fe900a04d793088162ca5f187c4b740978aae8ffa8495d71fb453aa7c5a8057" protocol=ttrpc version=3
May 15 12:40:51.836109 systemd[1]: Started cri-containerd-d91ab86bb90f18d9866c043f1dbd61b66b239c3beeab5fd2a54288ab2a972286.scope - libcontainer container d91ab86bb90f18d9866c043f1dbd61b66b239c3beeab5fd2a54288ab2a972286.
May 15 12:40:51.906049 containerd[1570]: time="2025-05-15T12:40:51.905927506Z" level=info msg="StartContainer for \"d91ab86bb90f18d9866c043f1dbd61b66b239c3beeab5fd2a54288ab2a972286\" returns successfully"
May 15 12:40:52.247377 kubelet[2830]: I0515 12:40:52.247305 2830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-794557d677-skbcb" podStartSLOduration=39.342413886 podStartE2EDuration="44.247284443s" podCreationTimestamp="2025-05-15 12:40:08 +0000 UTC" firstStartedPulling="2025-05-15 12:40:46.471920202 +0000 UTC m=+60.709159076" lastFinishedPulling="2025-05-15 12:40:51.376790759 +0000 UTC m=+65.614029633" observedRunningTime="2025-05-15 12:40:52.246258158 +0000 UTC m=+66.483497041" watchObservedRunningTime="2025-05-15 12:40:52.247284443 +0000 UTC m=+66.484523317"
May 15 12:40:53.235154 kubelet[2830]: I0515 12:40:53.235008 2830 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
May 15 12:40:53.235154 kubelet[2830]: I0515 12:40:53.235081 2830 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
May 15 12:40:54.885938 kubelet[2830]: E0515 12:40:54.885875 2830 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16"
May 15 12:40:55.323180 kubelet[2830]: I0515 12:40:55.323027 2830 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
May 15 12:40:55.373598 kubelet[2830]: I0515 12:40:55.373357 2830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-78b5784dc8-8lbpp" podStartSLOduration=43.085122055 podStartE2EDuration="48.373335267s" podCreationTimestamp="2025-05-15 12:40:07 +0000 UTC" firstStartedPulling="2025-05-15 12:40:46.484016323 +0000 UTC m=+60.721255197" lastFinishedPulling="2025-05-15 12:40:51.772229535 +0000 UTC m=+66.009468409" observedRunningTime="2025-05-15 12:40:52.263527851 +0000 UTC m=+66.500766725" watchObservedRunningTime="2025-05-15 12:40:55.373335267 +0000 UTC m=+69.610574141"
May 15 12:40:55.912800 containerd[1570]: time="2025-05-15T12:40:55.912707356Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 12:40:55.913774 containerd[1570]: time="2025-05-15T12:40:55.913638282Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.3: active requests=0, bytes read=34789138"
May 15 12:40:55.914951 containerd[1570]: time="2025-05-15T12:40:55.914897206Z" level=info msg="ImageCreate event name:\"sha256:4e982138231b3653a012db4f21ed5e7be69afd5f553dba38cf7e88f0ed740b94\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 12:40:55.920262 containerd[1570]: time="2025-05-15T12:40:55.920202984Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 12:40:55.926803 containerd[1570]: time="2025-05-15T12:40:55.926568192Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" with image id \"sha256:4e982138231b3653a012db4f21ed5e7be69afd5f553dba38cf7e88f0ed740b94\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\", size \"36281728\" in 4.153604583s"
May 15 12:40:55.927439 containerd[1570]: time="2025-05-15T12:40:55.927419193Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" returns image reference \"sha256:4e982138231b3653a012db4f21ed5e7be69afd5f553dba38cf7e88f0ed740b94\""
May 15 12:40:55.929415 containerd[1570]: time="2025-05-15T12:40:55.929380782Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\""
May 15 12:40:55.950676 containerd[1570]: time="2025-05-15T12:40:55.950301045Z" level=info msg="CreateContainer within sandbox \"5379b2ac23b2a1ea32c6cc4705e1c105234e0ab7b66afc66f1415997fe8905b9\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}"
May 15 12:40:55.958986 containerd[1570]: time="2025-05-15T12:40:55.957385002Z" level=info msg="Container bb47f88d23d868d5ebbe006bc9e7f942271143c5fb7c47820ae9bdcafc1fb884: CDI devices from CRI Config.CDIDevices: []"
May 15 12:40:55.966651 containerd[1570]: time="2025-05-15T12:40:55.966566009Z" level=info msg="CreateContainer within sandbox \"5379b2ac23b2a1ea32c6cc4705e1c105234e0ab7b66afc66f1415997fe8905b9\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"bb47f88d23d868d5ebbe006bc9e7f942271143c5fb7c47820ae9bdcafc1fb884\""
May 15 12:40:55.967452 containerd[1570]: time="2025-05-15T12:40:55.967437321Z" level=info msg="StartContainer for \"bb47f88d23d868d5ebbe006bc9e7f942271143c5fb7c47820ae9bdcafc1fb884\""
May 15 12:40:55.968686 containerd[1570]: time="2025-05-15T12:40:55.968667527Z" level=info msg="connecting to shim bb47f88d23d868d5ebbe006bc9e7f942271143c5fb7c47820ae9bdcafc1fb884" address="unix:///run/containerd/s/857cd5d336b7c622432bd7407f41ccd98a55e0eca72e9e1e9d727ea585eddded" protocol=ttrpc version=3
May 15 12:40:56.003156 systemd[1]: Started cri-containerd-bb47f88d23d868d5ebbe006bc9e7f942271143c5fb7c47820ae9bdcafc1fb884.scope - libcontainer container bb47f88d23d868d5ebbe006bc9e7f942271143c5fb7c47820ae9bdcafc1fb884.
May 15 12:40:56.071177 containerd[1570]: time="2025-05-15T12:40:56.071106971Z" level=info msg="StartContainer for \"bb47f88d23d868d5ebbe006bc9e7f942271143c5fb7c47820ae9bdcafc1fb884\" returns successfully"
May 15 12:40:56.267750 kubelet[2830]: I0515 12:40:56.266549 2830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-b4bb544b7-zbnfw" podStartSLOduration=39.708877809 podStartE2EDuration="48.266514371s" podCreationTimestamp="2025-05-15 12:40:08 +0000 UTC" firstStartedPulling="2025-05-15 12:40:47.371363495 +0000 UTC m=+61.608602369" lastFinishedPulling="2025-05-15 12:40:55.929000057 +0000 UTC m=+70.166238931" observedRunningTime="2025-05-15 12:40:56.263839041 +0000 UTC m=+70.501077915" watchObservedRunningTime="2025-05-15 12:40:56.266514371 +0000 UTC m=+70.503753245"
May 15 12:40:56.795734 containerd[1570]: time="2025-05-15T12:40:56.795346773Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 12:40:56.796385 containerd[1570]: time="2025-05-15T12:40:56.796352641Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=77"
May 15 12:40:56.797948 containerd[1570]: time="2025-05-15T12:40:56.797817408Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"44514075\" in 868.398433ms"
May 15 12:40:56.797948 containerd[1570]: time="2025-05-15T12:40:56.797847775Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\""
May 15 12:40:56.801039 containerd[1570]: time="2025-05-15T12:40:56.800939101Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\""
May 15 12:40:56.803421 containerd[1570]: time="2025-05-15T12:40:56.803349661Z" level=info msg="CreateContainer within sandbox \"d2d4e7036a63651a965aa6accf5e121eeee0d7127fedf7cc79343c9cd5b1752d\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
May 15 12:40:56.809592 containerd[1570]: time="2025-05-15T12:40:56.809565938Z" level=info msg="Container fed7eecbb666220a7f9e47c38d8e9e2159e339e9412e4f341cd255e0edad9e58: CDI devices from CRI Config.CDIDevices: []"
May 15 12:40:56.814546 containerd[1570]: time="2025-05-15T12:40:56.814493525Z" level=info msg="CreateContainer within sandbox \"d2d4e7036a63651a965aa6accf5e121eeee0d7127fedf7cc79343c9cd5b1752d\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"fed7eecbb666220a7f9e47c38d8e9e2159e339e9412e4f341cd255e0edad9e58\""
May 15 12:40:56.816553 containerd[1570]: time="2025-05-15T12:40:56.816379625Z" level=info msg="StartContainer for \"fed7eecbb666220a7f9e47c38d8e9e2159e339e9412e4f341cd255e0edad9e58\""
May 15 12:40:56.817634 containerd[1570]: time="2025-05-15T12:40:56.817614438Z" level=info msg="connecting to shim fed7eecbb666220a7f9e47c38d8e9e2159e339e9412e4f341cd255e0edad9e58" address="unix:///run/containerd/s/9e5f6758bbfe6e190b9dbed4f1f3114ac4a87d4b2ed9bcbfdb6f86e212d0d306" protocol=ttrpc version=3
May 15 12:40:56.846321 systemd[1]: Started cri-containerd-fed7eecbb666220a7f9e47c38d8e9e2159e339e9412e4f341cd255e0edad9e58.scope - libcontainer container fed7eecbb666220a7f9e47c38d8e9e2159e339e9412e4f341cd255e0edad9e58.
May 15 12:40:56.953267 containerd[1570]: time="2025-05-15T12:40:56.953147261Z" level=info msg="StartContainer for \"fed7eecbb666220a7f9e47c38d8e9e2159e339e9412e4f341cd255e0edad9e58\" returns successfully"
May 15 12:40:57.268472 kubelet[2830]: I0515 12:40:57.268190 2830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-78b5784dc8-mxm9v" podStartSLOduration=40.870838225 podStartE2EDuration="50.268168497s" podCreationTimestamp="2025-05-15 12:40:07 +0000 UTC" firstStartedPulling="2025-05-15 12:40:47.401759013 +0000 UTC m=+61.638997887" lastFinishedPulling="2025-05-15 12:40:56.799089285 +0000 UTC m=+71.036328159" observedRunningTime="2025-05-15 12:40:57.267247749 +0000 UTC m=+71.504486623" watchObservedRunningTime="2025-05-15 12:40:57.268168497 +0000 UTC m=+71.505407371"
May 15 12:40:57.348736 containerd[1570]: time="2025-05-15T12:40:57.348469843Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bb47f88d23d868d5ebbe006bc9e7f942271143c5fb7c47820ae9bdcafc1fb884\" id:\"d52c25126b7cb2107161fc0d4cba60dc0bf2486d04dbcf59a42059c95baa8e45\" pid:4840 exited_at:{seconds:1747312857 nanos:348085638}"
May 15 12:40:58.256278 kubelet[2830]: I0515 12:40:58.256218 2830 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
May 15 12:40:58.378580 containerd[1570]: time="2025-05-15T12:40:58.378530508Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 12:40:58.379417 containerd[1570]: time="2025-05-15T12:40:58.379388454Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.3: active requests=0, bytes read=7912898"
May 15 12:40:58.380083 containerd[1570]: time="2025-05-15T12:40:58.380039200Z" level=info msg="ImageCreate event name:\"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 12:40:58.381995 containerd[1570]: time="2025-05-15T12:40:58.381562016Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 12:40:58.382233 containerd[1570]: time="2025-05-15T12:40:58.382206964Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.3\" with image id \"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\", size \"9405520\" in 1.580999846s"
May 15 12:40:58.382268 containerd[1570]: time="2025-05-15T12:40:58.382238092Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\" returns image reference \"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\""
May 15 12:40:58.385991 containerd[1570]: time="2025-05-15T12:40:58.385940428Z" level=info msg="CreateContainer within sandbox \"f37316bf8af73fc9883e305e2dfc76db84fd711c65aab55aed37a15e1043fcee\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}"
May 15 12:40:58.395313 containerd[1570]: time="2025-05-15T12:40:58.395285874Z" level=info msg="Container 29b578514cca0f2a3a6b33b170d16f2bde3918d07f4cfd7bbe9e5ae0ce844385: CDI devices from CRI Config.CDIDevices: []"
May 15 12:40:58.402051 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4204840230.mount: Deactivated successfully.
May 15 12:40:58.404607 containerd[1570]: time="2025-05-15T12:40:58.404579499Z" level=info msg="CreateContainer within sandbox \"f37316bf8af73fc9883e305e2dfc76db84fd711c65aab55aed37a15e1043fcee\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"29b578514cca0f2a3a6b33b170d16f2bde3918d07f4cfd7bbe9e5ae0ce844385\""
May 15 12:40:58.405341 containerd[1570]: time="2025-05-15T12:40:58.405318671Z" level=info msg="StartContainer for \"29b578514cca0f2a3a6b33b170d16f2bde3918d07f4cfd7bbe9e5ae0ce844385\""
May 15 12:40:58.406700 containerd[1570]: time="2025-05-15T12:40:58.406673933Z" level=info msg="connecting to shim 29b578514cca0f2a3a6b33b170d16f2bde3918d07f4cfd7bbe9e5ae0ce844385" address="unix:///run/containerd/s/ccefcdcd635cc751dd2c51f5c138c057a071f41cd9a743c4e647acb0050a5a7c" protocol=ttrpc version=3
May 15 12:40:58.439107 systemd[1]: Started cri-containerd-29b578514cca0f2a3a6b33b170d16f2bde3918d07f4cfd7bbe9e5ae0ce844385.scope - libcontainer container 29b578514cca0f2a3a6b33b170d16f2bde3918d07f4cfd7bbe9e5ae0ce844385.
May 15 12:40:58.487858 containerd[1570]: time="2025-05-15T12:40:58.487813959Z" level=info msg="StartContainer for \"29b578514cca0f2a3a6b33b170d16f2bde3918d07f4cfd7bbe9e5ae0ce844385\" returns successfully"
May 15 12:40:58.490338 containerd[1570]: time="2025-05-15T12:40:58.490085933Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\""
May 15 12:41:01.258878 containerd[1570]: time="2025-05-15T12:41:01.258835927Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 12:41:01.259783 containerd[1570]: time="2025-05-15T12:41:01.259648463Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3: active requests=0, bytes read=13991773"
May 15 12:41:01.260402 containerd[1570]: time="2025-05-15T12:41:01.260376788Z" level=info msg="ImageCreate event name:\"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 12:41:01.261755 containerd[1570]: time="2025-05-15T12:41:01.261732543Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 12:41:01.262617 containerd[1570]: time="2025-05-15T12:41:01.262488788Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" with image id \"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\", size \"15484347\" in 2.77200468s"
May 15 12:41:01.262617 containerd[1570]: time="2025-05-15T12:41:01.262520997Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" returns image reference \"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\""
May 15 12:41:01.265310 containerd[1570]: time="2025-05-15T12:41:01.265290897Z" level=info msg="CreateContainer within sandbox \"f37316bf8af73fc9883e305e2dfc76db84fd711c65aab55aed37a15e1043fcee\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
May 15 12:41:01.271444 containerd[1570]: time="2025-05-15T12:41:01.270591521Z" level=info msg="Container fb6eb9e30771a96ea3703c89c55dc806036987edebeabfca2910e6062895d39d: CDI devices from CRI Config.CDIDevices: []"
May 15 12:41:01.282093 containerd[1570]: time="2025-05-15T12:41:01.282054158Z" level=info msg="CreateContainer within sandbox \"f37316bf8af73fc9883e305e2dfc76db84fd711c65aab55aed37a15e1043fcee\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"fb6eb9e30771a96ea3703c89c55dc806036987edebeabfca2910e6062895d39d\""
May 15 12:41:01.282663 containerd[1570]: time="2025-05-15T12:41:01.282638364Z" level=info msg="StartContainer for \"fb6eb9e30771a96ea3703c89c55dc806036987edebeabfca2910e6062895d39d\""
May 15 12:41:01.284462 containerd[1570]: time="2025-05-15T12:41:01.284440073Z" level=info msg="connecting to shim fb6eb9e30771a96ea3703c89c55dc806036987edebeabfca2910e6062895d39d" address="unix:///run/containerd/s/ccefcdcd635cc751dd2c51f5c138c057a071f41cd9a743c4e647acb0050a5a7c" protocol=ttrpc version=3
May 15 12:41:01.321313 systemd[1]: Started cri-containerd-fb6eb9e30771a96ea3703c89c55dc806036987edebeabfca2910e6062895d39d.scope - libcontainer container fb6eb9e30771a96ea3703c89c55dc806036987edebeabfca2910e6062895d39d.
May 15 12:41:01.383393 containerd[1570]: time="2025-05-15T12:41:01.383348315Z" level=info msg="StartContainer for \"fb6eb9e30771a96ea3703c89c55dc806036987edebeabfca2910e6062895d39d\" returns successfully"
May 15 12:41:01.707358 containerd[1570]: time="2025-05-15T12:41:01.707307535Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bb47f88d23d868d5ebbe006bc9e7f942271143c5fb7c47820ae9bdcafc1fb884\" id:\"ebe7184640aa3ec5e8ed15d351b92d9e88372006c1aaa23a244a7cb933b7ca72\" pid:4936 exited_at:{seconds:1747312861 nanos:707034060}"
May 15 12:41:02.073916 kubelet[2830]: I0515 12:41:02.073818 2830 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
May 15 12:41:02.073916 kubelet[2830]: I0515 12:41:02.073849 2830 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
May 15 12:41:02.290289 kubelet[2830]: I0515 12:41:02.288699 2830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-n6z76" podStartSLOduration=42.141114742 podStartE2EDuration="55.288675093s" podCreationTimestamp="2025-05-15 12:40:07 +0000 UTC" firstStartedPulling="2025-05-15 12:40:48.116394104 +0000 UTC m=+62.353632988" lastFinishedPulling="2025-05-15 12:41:01.263954455 +0000 UTC m=+75.501193339" observedRunningTime="2025-05-15 12:41:02.28770787 +0000 UTC m=+76.524946744" watchObservedRunningTime="2025-05-15 12:41:02.288675093 +0000 UTC m=+76.525913987"
May 15 12:41:02.832293 kubelet[2830]: I0515 12:41:02.831958 2830 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
May 15 12:41:02.885174 kubelet[2830]: I0515 12:41:02.884709 2830 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
May 15 12:41:02.886495 containerd[1570]: time="2025-05-15T12:41:02.886376829Z" level=info msg="StopContainer for \"fed7eecbb666220a7f9e47c38d8e9e2159e339e9412e4f341cd255e0edad9e58\" with timeout 30 (s)"
May 15 12:41:02.888088 containerd[1570]: time="2025-05-15T12:41:02.888067319Z" level=info msg="Stop container \"fed7eecbb666220a7f9e47c38d8e9e2159e339e9412e4f341cd255e0edad9e58\" with signal terminated"
May 15 12:41:02.934959 kubelet[2830]: I0515 12:41:02.934771 2830 topology_manager.go:215] "Topology Admit Handler" podUID="5a387ba8-770f-45d6-89df-efc3f6037e49" podNamespace="calico-apiserver" podName="calico-apiserver-794557d677-5xk8c"
May 15 12:41:02.948955 systemd[1]: Created slice kubepods-besteffort-pod5a387ba8_770f_45d6_89df_efc3f6037e49.slice - libcontainer container kubepods-besteffort-pod5a387ba8_770f_45d6_89df_efc3f6037e49.slice.
May 15 12:41:02.960225 systemd[1]: cri-containerd-fed7eecbb666220a7f9e47c38d8e9e2159e339e9412e4f341cd255e0edad9e58.scope: Deactivated successfully.
May 15 12:41:02.966906 containerd[1570]: time="2025-05-15T12:41:02.966661792Z" level=info msg="received exit event container_id:\"fed7eecbb666220a7f9e47c38d8e9e2159e339e9412e4f341cd255e0edad9e58\" id:\"fed7eecbb666220a7f9e47c38d8e9e2159e339e9412e4f341cd255e0edad9e58\" pid:4808 exit_status:1 exited_at:{seconds:1747312862 nanos:965581947}"
May 15 12:41:02.969989 containerd[1570]: time="2025-05-15T12:41:02.969939316Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fed7eecbb666220a7f9e47c38d8e9e2159e339e9412e4f341cd255e0edad9e58\" id:\"fed7eecbb666220a7f9e47c38d8e9e2159e339e9412e4f341cd255e0edad9e58\" pid:4808 exit_status:1 exited_at:{seconds:1747312862 nanos:965581947}"
May 15 12:41:03.007524 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fed7eecbb666220a7f9e47c38d8e9e2159e339e9412e4f341cd255e0edad9e58-rootfs.mount: Deactivated successfully.
May 15 12:41:03.029916 kubelet[2830]: I0515 12:41:03.029743 2830 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/5a387ba8-770f-45d6-89df-efc3f6037e49-calico-apiserver-certs\") pod \"calico-apiserver-794557d677-5xk8c\" (UID: \"5a387ba8-770f-45d6-89df-efc3f6037e49\") " pod="calico-apiserver/calico-apiserver-794557d677-5xk8c"
May 15 12:41:03.029916 kubelet[2830]: I0515 12:41:03.029832 2830 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6mgkp\" (UniqueName: \"kubernetes.io/projected/5a387ba8-770f-45d6-89df-efc3f6037e49-kube-api-access-6mgkp\") pod \"calico-apiserver-794557d677-5xk8c\" (UID: \"5a387ba8-770f-45d6-89df-efc3f6037e49\") " pod="calico-apiserver/calico-apiserver-794557d677-5xk8c"
May 15 12:41:03.107383 containerd[1570]: time="2025-05-15T12:41:03.107190958Z" level=info msg="StopContainer for \"fed7eecbb666220a7f9e47c38d8e9e2159e339e9412e4f341cd255e0edad9e58\" returns successfully"
May 15 12:41:03.108299 containerd[1570]: time="2025-05-15T12:41:03.108095384Z" level=info msg="StopPodSandbox for \"d2d4e7036a63651a965aa6accf5e121eeee0d7127fedf7cc79343c9cd5b1752d\""
May 15 12:41:03.108299 containerd[1570]: time="2025-05-15T12:41:03.108204378Z" level=info msg="Container to stop \"fed7eecbb666220a7f9e47c38d8e9e2159e339e9412e4f341cd255e0edad9e58\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 15 12:41:03.116452 systemd[1]: cri-containerd-d2d4e7036a63651a965aa6accf5e121eeee0d7127fedf7cc79343c9cd5b1752d.scope: Deactivated successfully.
May 15 12:41:03.118632 containerd[1570]: time="2025-05-15T12:41:03.118442066Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d2d4e7036a63651a965aa6accf5e121eeee0d7127fedf7cc79343c9cd5b1752d\" id:\"d2d4e7036a63651a965aa6accf5e121eeee0d7127fedf7cc79343c9cd5b1752d\" pid:4488 exit_status:137 exited_at:{seconds:1747312863 nanos:117681713}"
May 15 12:41:03.165599 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d2d4e7036a63651a965aa6accf5e121eeee0d7127fedf7cc79343c9cd5b1752d-rootfs.mount: Deactivated successfully.
May 15 12:41:03.166505 containerd[1570]: time="2025-05-15T12:41:03.165820401Z" level=info msg="received exit event sandbox_id:\"d2d4e7036a63651a965aa6accf5e121eeee0d7127fedf7cc79343c9cd5b1752d\" exit_status:137 exited_at:{seconds:1747312863 nanos:117681713}"
May 15 12:41:03.169470 containerd[1570]: time="2025-05-15T12:41:03.169422729Z" level=info msg="shim disconnected" id=d2d4e7036a63651a965aa6accf5e121eeee0d7127fedf7cc79343c9cd5b1752d namespace=k8s.io
May 15 12:41:03.169520 containerd[1570]: time="2025-05-15T12:41:03.169474022Z" level=warning msg="cleaning up after shim disconnected" id=d2d4e7036a63651a965aa6accf5e121eeee0d7127fedf7cc79343c9cd5b1752d namespace=k8s.io
May 15 12:41:03.169555 containerd[1570]: time="2025-05-15T12:41:03.169483469Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 15 12:41:03.170642 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d2d4e7036a63651a965aa6accf5e121eeee0d7127fedf7cc79343c9cd5b1752d-shm.mount: Deactivated successfully.
May 15 12:41:03.232280 systemd-networkd[1466]: cali1e217452c7d: Link DOWN
May 15 12:41:03.232800 systemd-networkd[1466]: cali1e217452c7d: Lost carrier
May 15 12:41:03.257026 containerd[1570]: time="2025-05-15T12:41:03.256163710Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-794557d677-5xk8c,Uid:5a387ba8-770f-45d6-89df-efc3f6037e49,Namespace:calico-apiserver,Attempt:0,}"
May 15 12:41:03.284650 kubelet[2830]: I0515 12:41:03.284034 2830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d2d4e7036a63651a965aa6accf5e121eeee0d7127fedf7cc79343c9cd5b1752d"
May 15 12:41:03.341160 containerd[1570]: 2025-05-15 12:41:03.227 [INFO][5023] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d2d4e7036a63651a965aa6accf5e121eeee0d7127fedf7cc79343c9cd5b1752d"
May 15 12:41:03.341160 containerd[1570]: 2025-05-15 12:41:03.228 [INFO][5023] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d2d4e7036a63651a965aa6accf5e121eeee0d7127fedf7cc79343c9cd5b1752d" iface="eth0" netns="/var/run/netns/cni-5d81e484-9068-970b-494c-a5cf5dfded8e"
May 15 12:41:03.341160 containerd[1570]: 2025-05-15 12:41:03.230 [INFO][5023] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d2d4e7036a63651a965aa6accf5e121eeee0d7127fedf7cc79343c9cd5b1752d" iface="eth0" netns="/var/run/netns/cni-5d81e484-9068-970b-494c-a5cf5dfded8e"
May 15 12:41:03.341160 containerd[1570]: 2025-05-15 12:41:03.239 [INFO][5023] cni-plugin/dataplane_linux.go 604: Deleted device in netns. ContainerID="d2d4e7036a63651a965aa6accf5e121eeee0d7127fedf7cc79343c9cd5b1752d" after=10.373003ms iface="eth0" netns="/var/run/netns/cni-5d81e484-9068-970b-494c-a5cf5dfded8e"
May 15 12:41:03.341160 containerd[1570]: 2025-05-15 12:41:03.239 [INFO][5023] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d2d4e7036a63651a965aa6accf5e121eeee0d7127fedf7cc79343c9cd5b1752d"
May 15 12:41:03.341160 containerd[1570]: 2025-05-15 12:41:03.239 [INFO][5023] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d2d4e7036a63651a965aa6accf5e121eeee0d7127fedf7cc79343c9cd5b1752d"
May 15 12:41:03.341160 containerd[1570]: 2025-05-15 12:41:03.292 [INFO][5033] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d2d4e7036a63651a965aa6accf5e121eeee0d7127fedf7cc79343c9cd5b1752d" HandleID="k8s-pod-network.d2d4e7036a63651a965aa6accf5e121eeee0d7127fedf7cc79343c9cd5b1752d" Workload="172--236--125--189-k8s-calico--apiserver--78b5784dc8--mxm9v-eth0"
May 15 12:41:03.341160 containerd[1570]: 2025-05-15 12:41:03.293 [INFO][5033] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
May 15 12:41:03.341160 containerd[1570]: 2025-05-15 12:41:03.293 [INFO][5033] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
May 15 12:41:03.341160 containerd[1570]: 2025-05-15 12:41:03.333 [INFO][5033] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="d2d4e7036a63651a965aa6accf5e121eeee0d7127fedf7cc79343c9cd5b1752d" HandleID="k8s-pod-network.d2d4e7036a63651a965aa6accf5e121eeee0d7127fedf7cc79343c9cd5b1752d" Workload="172--236--125--189-k8s-calico--apiserver--78b5784dc8--mxm9v-eth0"
May 15 12:41:03.341160 containerd[1570]: 2025-05-15 12:41:03.334 [INFO][5033] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d2d4e7036a63651a965aa6accf5e121eeee0d7127fedf7cc79343c9cd5b1752d" HandleID="k8s-pod-network.d2d4e7036a63651a965aa6accf5e121eeee0d7127fedf7cc79343c9cd5b1752d" Workload="172--236--125--189-k8s-calico--apiserver--78b5784dc8--mxm9v-eth0"
May 15 12:41:03.341160 containerd[1570]: 2025-05-15 12:41:03.335 [INFO][5033] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
May 15 12:41:03.341160 containerd[1570]: 2025-05-15 12:41:03.339 [INFO][5023] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="d2d4e7036a63651a965aa6accf5e121eeee0d7127fedf7cc79343c9cd5b1752d"
May 15 12:41:03.342029 containerd[1570]: time="2025-05-15T12:41:03.341988280Z" level=info msg="TearDown network for sandbox \"d2d4e7036a63651a965aa6accf5e121eeee0d7127fedf7cc79343c9cd5b1752d\" successfully"
May 15 12:41:03.342117 containerd[1570]: time="2025-05-15T12:41:03.342103822Z" level=info msg="StopPodSandbox for \"d2d4e7036a63651a965aa6accf5e121eeee0d7127fedf7cc79343c9cd5b1752d\" returns successfully"
May 15 12:41:03.404225 systemd-networkd[1466]: caliceb949a3987: Link UP
May 15 12:41:03.405105 systemd-networkd[1466]: caliceb949a3987: Gained carrier
May 15 12:41:03.417801 containerd[1570]: 2025-05-15 12:41:03.316 [INFO][5042] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--236--125--189-k8s-calico--apiserver--794557d677--5xk8c-eth0 calico-apiserver-794557d677- calico-apiserver 5a387ba8-770f-45d6-89df-efc3f6037e49 1039 0 2025-05-15 12:41:02 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:794557d677 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s 172-236-125-189 calico-apiserver-794557d677-5xk8c eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] caliceb949a3987 [] []}} ContainerID="89259dcfda5d065523c763cffd9eea3848dbcda9892a28414623cbe028f8b2af" Namespace="calico-apiserver" Pod="calico-apiserver-794557d677-5xk8c" WorkloadEndpoint="172--236--125--189-k8s-calico--apiserver--794557d677--5xk8c-"
May 15 12:41:03.417801 containerd[1570]: 2025-05-15 12:41:03.316 [INFO][5042] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="89259dcfda5d065523c763cffd9eea3848dbcda9892a28414623cbe028f8b2af" Namespace="calico-apiserver" Pod="calico-apiserver-794557d677-5xk8c" WorkloadEndpoint="172--236--125--189-k8s-calico--apiserver--794557d677--5xk8c-eth0"
May 15 12:41:03.417801 containerd[1570]: 2025-05-15 12:41:03.360 [INFO][5057] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="89259dcfda5d065523c763cffd9eea3848dbcda9892a28414623cbe028f8b2af" HandleID="k8s-pod-network.89259dcfda5d065523c763cffd9eea3848dbcda9892a28414623cbe028f8b2af" Workload="172--236--125--189-k8s-calico--apiserver--794557d677--5xk8c-eth0"
May 15 12:41:03.417801 containerd[1570]: 2025-05-15 12:41:03.370 [INFO][5057] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="89259dcfda5d065523c763cffd9eea3848dbcda9892a28414623cbe028f8b2af" HandleID="k8s-pod-network.89259dcfda5d065523c763cffd9eea3848dbcda9892a28414623cbe028f8b2af" Workload="172--236--125--189-k8s-calico--apiserver--794557d677--5xk8c-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00031d730), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"172-236-125-189", "pod":"calico-apiserver-794557d677-5xk8c", "timestamp":"2025-05-15 12:41:03.360752511 +0000 UTC"}, Hostname:"172-236-125-189", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
May 15 12:41:03.417801 containerd[1570]: 2025-05-15 12:41:03.370 [INFO][5057] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
May 15 12:41:03.417801 containerd[1570]: 2025-05-15 12:41:03.370 [INFO][5057] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
May 15 12:41:03.417801 containerd[1570]: 2025-05-15 12:41:03.370 [INFO][5057] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-236-125-189'
May 15 12:41:03.417801 containerd[1570]: 2025-05-15 12:41:03.373 [INFO][5057] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.89259dcfda5d065523c763cffd9eea3848dbcda9892a28414623cbe028f8b2af" host="172-236-125-189"
May 15 12:41:03.417801 containerd[1570]: 2025-05-15 12:41:03.377 [INFO][5057] ipam/ipam.go 372: Looking up existing affinities for host host="172-236-125-189"
May 15 12:41:03.417801 containerd[1570]: 2025-05-15 12:41:03.382 [INFO][5057] ipam/ipam.go 489: Trying affinity for 192.168.83.128/26 host="172-236-125-189"
May 15 12:41:03.417801 containerd[1570]: 2025-05-15 12:41:03.384 [INFO][5057] ipam/ipam.go 155: Attempting to load block cidr=192.168.83.128/26 host="172-236-125-189"
May 15 12:41:03.417801 containerd[1570]: 2025-05-15 12:41:03.386 [INFO][5057] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.83.128/26 host="172-236-125-189"
May 15 12:41:03.417801 containerd[1570]: 2025-05-15 12:41:03.386 [INFO][5057] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.83.128/26 handle="k8s-pod-network.89259dcfda5d065523c763cffd9eea3848dbcda9892a28414623cbe028f8b2af" host="172-236-125-189"
May 15 12:41:03.417801 containerd[1570]: 2025-05-15 12:41:03.388 [INFO][5057] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.89259dcfda5d065523c763cffd9eea3848dbcda9892a28414623cbe028f8b2af
May 15 12:41:03.417801 containerd[1570]: 2025-05-15 12:41:03.391 [INFO][5057] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.83.128/26 handle="k8s-pod-network.89259dcfda5d065523c763cffd9eea3848dbcda9892a28414623cbe028f8b2af" host="172-236-125-189"
May 15 12:41:03.417801 containerd[1570]: 2025-05-15 12:41:03.397 [INFO][5057] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.83.136/26] block=192.168.83.128/26 handle="k8s-pod-network.89259dcfda5d065523c763cffd9eea3848dbcda9892a28414623cbe028f8b2af" host="172-236-125-189"
May 15 12:41:03.417801 containerd[1570]: 2025-05-15 12:41:03.398 [INFO][5057] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.83.136/26] handle="k8s-pod-network.89259dcfda5d065523c763cffd9eea3848dbcda9892a28414623cbe028f8b2af" host="172-236-125-189"
May 15 12:41:03.417801 containerd[1570]: 2025-05-15 12:41:03.398 [INFO][5057] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
May 15 12:41:03.417801 containerd[1570]: 2025-05-15 12:41:03.398 [INFO][5057] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.83.136/26] IPv6=[] ContainerID="89259dcfda5d065523c763cffd9eea3848dbcda9892a28414623cbe028f8b2af" HandleID="k8s-pod-network.89259dcfda5d065523c763cffd9eea3848dbcda9892a28414623cbe028f8b2af" Workload="172--236--125--189-k8s-calico--apiserver--794557d677--5xk8c-eth0"
May 15 12:41:03.418310 containerd[1570]: 2025-05-15 12:41:03.400 [INFO][5042] cni-plugin/k8s.go 386: Populated endpoint ContainerID="89259dcfda5d065523c763cffd9eea3848dbcda9892a28414623cbe028f8b2af" Namespace="calico-apiserver" Pod="calico-apiserver-794557d677-5xk8c" WorkloadEndpoint="172--236--125--189-k8s-calico--apiserver--794557d677--5xk8c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--236--125--189-k8s-calico--apiserver--794557d677--5xk8c-eth0", GenerateName:"calico-apiserver-794557d677-", Namespace:"calico-apiserver", SelfLink:"", UID:"5a387ba8-770f-45d6-89df-efc3f6037e49", ResourceVersion:"1039", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 12, 41, 2, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"794557d677", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-236-125-189", ContainerID:"", Pod:"calico-apiserver-794557d677-5xk8c", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.83.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliceb949a3987", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
May 15 12:41:03.418310 containerd[1570]: 2025-05-15 12:41:03.401 [INFO][5042] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.83.136/32] ContainerID="89259dcfda5d065523c763cffd9eea3848dbcda9892a28414623cbe028f8b2af" Namespace="calico-apiserver" Pod="calico-apiserver-794557d677-5xk8c" WorkloadEndpoint="172--236--125--189-k8s-calico--apiserver--794557d677--5xk8c-eth0"
May 15 12:41:03.418310 containerd[1570]: 2025-05-15 12:41:03.401 [INFO][5042] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliceb949a3987 ContainerID="89259dcfda5d065523c763cffd9eea3848dbcda9892a28414623cbe028f8b2af" Namespace="calico-apiserver" Pod="calico-apiserver-794557d677-5xk8c" WorkloadEndpoint="172--236--125--189-k8s-calico--apiserver--794557d677--5xk8c-eth0"
May 15 12:41:03.418310 containerd[1570]: 2025-05-15 12:41:03.403 [INFO][5042] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="89259dcfda5d065523c763cffd9eea3848dbcda9892a28414623cbe028f8b2af" Namespace="calico-apiserver" Pod="calico-apiserver-794557d677-5xk8c" WorkloadEndpoint="172--236--125--189-k8s-calico--apiserver--794557d677--5xk8c-eth0"
May 15 12:41:03.418310 containerd[1570]: 2025-05-15 12:41:03.403 [INFO][5042] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="89259dcfda5d065523c763cffd9eea3848dbcda9892a28414623cbe028f8b2af" Namespace="calico-apiserver" Pod="calico-apiserver-794557d677-5xk8c" WorkloadEndpoint="172--236--125--189-k8s-calico--apiserver--794557d677--5xk8c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--236--125--189-k8s-calico--apiserver--794557d677--5xk8c-eth0", GenerateName:"calico-apiserver-794557d677-", Namespace:"calico-apiserver", SelfLink:"", UID:"5a387ba8-770f-45d6-89df-efc3f6037e49", ResourceVersion:"1039", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 12, 41, 2, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"794557d677", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-236-125-189", ContainerID:"89259dcfda5d065523c763cffd9eea3848dbcda9892a28414623cbe028f8b2af", Pod:"calico-apiserver-794557d677-5xk8c", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.83.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliceb949a3987", MAC:"46:80:33:4e:e2:33", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
May 15 12:41:03.418310 containerd[1570]: 2025-05-15 12:41:03.410 [INFO][5042] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="89259dcfda5d065523c763cffd9eea3848dbcda9892a28414623cbe028f8b2af" Namespace="calico-apiserver" Pod="calico-apiserver-794557d677-5xk8c" WorkloadEndpoint="172--236--125--189-k8s-calico--apiserver--794557d677--5xk8c-eth0"
May 15 12:41:03.447732 containerd[1570]: time="2025-05-15T12:41:03.447680051Z" level=info msg="connecting to shim 89259dcfda5d065523c763cffd9eea3848dbcda9892a28414623cbe028f8b2af" address="unix:///run/containerd/s/6a1127d0f3973450e08abb16bce1ba450a5aee90bb1109278ed0fc78b3275932" namespace=k8s.io protocol=ttrpc version=3
May 15 12:41:03.476120 systemd[1]: Started cri-containerd-89259dcfda5d065523c763cffd9eea3848dbcda9892a28414623cbe028f8b2af.scope - libcontainer container 89259dcfda5d065523c763cffd9eea3848dbcda9892a28414623cbe028f8b2af.
May 15 12:41:03.528404 containerd[1570]: time="2025-05-15T12:41:03.528322908Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-794557d677-5xk8c,Uid:5a387ba8-770f-45d6-89df-efc3f6037e49,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"89259dcfda5d065523c763cffd9eea3848dbcda9892a28414623cbe028f8b2af\""
May 15 12:41:03.533177 containerd[1570]: time="2025-05-15T12:41:03.533068953Z" level=info msg="CreateContainer within sandbox \"89259dcfda5d065523c763cffd9eea3848dbcda9892a28414623cbe028f8b2af\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
May 15 12:41:03.533725 kubelet[2830]: I0515 12:41:03.533397 2830 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/f51c33cf-e651-4159-ba52-866ced1779f7-calico-apiserver-certs\") pod \"f51c33cf-e651-4159-ba52-866ced1779f7\" (UID: \"f51c33cf-e651-4159-ba52-866ced1779f7\") "
May 15 12:41:03.534079 kubelet[2830]: I0515 12:41:03.533847 2830 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kwnjz\" (UniqueName: \"kubernetes.io/projected/f51c33cf-e651-4159-ba52-866ced1779f7-kube-api-access-kwnjz\") pod \"f51c33cf-e651-4159-ba52-866ced1779f7\" (UID: \"f51c33cf-e651-4159-ba52-866ced1779f7\") "
May 15 12:41:03.538498 kubelet[2830]: I0515 12:41:03.538461 2830 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f51c33cf-e651-4159-ba52-866ced1779f7-kube-api-access-kwnjz" (OuterVolumeSpecName: "kube-api-access-kwnjz") pod "f51c33cf-e651-4159-ba52-866ced1779f7" (UID: "f51c33cf-e651-4159-ba52-866ced1779f7"). InnerVolumeSpecName "kube-api-access-kwnjz". PluginName "kubernetes.io/projected", VolumeGidValue ""
May 15 12:41:03.540604 kubelet[2830]: I0515 12:41:03.540547 2830 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f51c33cf-e651-4159-ba52-866ced1779f7-calico-apiserver-certs" (OuterVolumeSpecName: "calico-apiserver-certs") pod "f51c33cf-e651-4159-ba52-866ced1779f7" (UID: "f51c33cf-e651-4159-ba52-866ced1779f7"). InnerVolumeSpecName "calico-apiserver-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
May 15 12:41:03.541600 containerd[1570]: time="2025-05-15T12:41:03.541556530Z" level=info msg="Container 519a3c7acea7ad69108c7452d0efc9163083bf5a5c6abbfb038068397009324e: CDI devices from CRI Config.CDIDevices: []"
May 15 12:41:03.546574 containerd[1570]: time="2025-05-15T12:41:03.546534789Z" level=info msg="CreateContainer within sandbox \"89259dcfda5d065523c763cffd9eea3848dbcda9892a28414623cbe028f8b2af\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"519a3c7acea7ad69108c7452d0efc9163083bf5a5c6abbfb038068397009324e\""
May 15 12:41:03.547854 containerd[1570]: time="2025-05-15T12:41:03.547442893Z" level=info msg="StartContainer for \"519a3c7acea7ad69108c7452d0efc9163083bf5a5c6abbfb038068397009324e\""
May 15 12:41:03.550188 containerd[1570]: time="2025-05-15T12:41:03.550150812Z" level=info msg="connecting to shim 519a3c7acea7ad69108c7452d0efc9163083bf5a5c6abbfb038068397009324e" address="unix:///run/containerd/s/6a1127d0f3973450e08abb16bce1ba450a5aee90bb1109278ed0fc78b3275932" protocol=ttrpc version=3
May 15 12:41:03.577143 systemd[1]: Started cri-containerd-519a3c7acea7ad69108c7452d0efc9163083bf5a5c6abbfb038068397009324e.scope - libcontainer container 519a3c7acea7ad69108c7452d0efc9163083bf5a5c6abbfb038068397009324e.
May 15 12:41:03.634923 kubelet[2830]: I0515 12:41:03.634893 2830 reconciler_common.go:289] "Volume detached for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/f51c33cf-e651-4159-ba52-866ced1779f7-calico-apiserver-certs\") on node \"172-236-125-189\" DevicePath \"\""
May 15 12:41:03.635170 kubelet[2830]: I0515 12:41:03.635152 2830 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-kwnjz\" (UniqueName: \"kubernetes.io/projected/f51c33cf-e651-4159-ba52-866ced1779f7-kube-api-access-kwnjz\") on node \"172-236-125-189\" DevicePath \"\""
May 15 12:41:03.636356 containerd[1570]: time="2025-05-15T12:41:03.636220642Z" level=info msg="StartContainer for \"519a3c7acea7ad69108c7452d0efc9163083bf5a5c6abbfb038068397009324e\" returns successfully"
May 15 12:41:03.894714 systemd[1]: Removed slice kubepods-besteffort-podf51c33cf_e651_4159_ba52_866ced1779f7.slice - libcontainer container kubepods-besteffort-podf51c33cf_e651_4159_ba52_866ced1779f7.slice.
May 15 12:41:04.011354 systemd[1]: run-netns-cni\x2d5d81e484\x2d9068\x2d970b\x2d494c\x2da5cf5dfded8e.mount: Deactivated successfully.
May 15 12:41:04.011466 systemd[1]: var-lib-kubelet-pods-f51c33cf\x2de651\x2d4159\x2dba52\x2d866ced1779f7-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dkwnjz.mount: Deactivated successfully.
May 15 12:41:04.011539 systemd[1]: var-lib-kubelet-pods-f51c33cf\x2de651\x2d4159\x2dba52\x2d866ced1779f7-volumes-kubernetes.io\x7esecret-calico\x2dapiserver\x2dcerts.mount: Deactivated successfully.
May 15 12:41:04.345137 kubelet[2830]: I0515 12:41:04.343568 2830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-794557d677-5xk8c" podStartSLOduration=2.343545519 podStartE2EDuration="2.343545519s" podCreationTimestamp="2025-05-15 12:41:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 12:41:04.318660169 +0000 UTC m=+78.555899043" watchObservedRunningTime="2025-05-15 12:41:04.343545519 +0000 UTC m=+78.580784393"
May 15 12:41:04.396080 containerd[1570]: time="2025-05-15T12:41:04.395954451Z" level=info msg="StopContainer for \"d91ab86bb90f18d9866c043f1dbd61b66b239c3beeab5fd2a54288ab2a972286\" with timeout 30 (s)"
May 15 12:41:04.398445 containerd[1570]: time="2025-05-15T12:41:04.398235365Z" level=info msg="Stop container \"d91ab86bb90f18d9866c043f1dbd61b66b239c3beeab5fd2a54288ab2a972286\" with signal terminated"
May 15 12:41:04.438075 systemd[1]: cri-containerd-d91ab86bb90f18d9866c043f1dbd61b66b239c3beeab5fd2a54288ab2a972286.scope: Deactivated successfully.
May 15 12:41:04.448211 containerd[1570]: time="2025-05-15T12:41:04.448103354Z" level=info msg="received exit event container_id:\"d91ab86bb90f18d9866c043f1dbd61b66b239c3beeab5fd2a54288ab2a972286\" id:\"d91ab86bb90f18d9866c043f1dbd61b66b239c3beeab5fd2a54288ab2a972286\" pid:4722 exit_status:1 exited_at:{seconds:1747312864 nanos:447575530}"
May 15 12:41:04.448798 containerd[1570]: time="2025-05-15T12:41:04.448532309Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d91ab86bb90f18d9866c043f1dbd61b66b239c3beeab5fd2a54288ab2a972286\" id:\"d91ab86bb90f18d9866c043f1dbd61b66b239c3beeab5fd2a54288ab2a972286\" pid:4722 exit_status:1 exited_at:{seconds:1747312864 nanos:447575530}"
May 15 12:41:04.502709 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d91ab86bb90f18d9866c043f1dbd61b66b239c3beeab5fd2a54288ab2a972286-rootfs.mount: Deactivated successfully.
May 15 12:41:04.507713 containerd[1570]: time="2025-05-15T12:41:04.507653514Z" level=info msg="StopContainer for \"d91ab86bb90f18d9866c043f1dbd61b66b239c3beeab5fd2a54288ab2a972286\" returns successfully"
May 15 12:41:04.509797 containerd[1570]: time="2025-05-15T12:41:04.509758913Z" level=info msg="StopPodSandbox for \"fa02e25e87b15e2f48a92a373c9456feb99eee2b757d3822e3fb67ac0ecb034a\""
May 15 12:41:04.509991 containerd[1570]: time="2025-05-15T12:41:04.509930320Z" level=info msg="Container to stop \"d91ab86bb90f18d9866c043f1dbd61b66b239c3beeab5fd2a54288ab2a972286\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 15 12:41:04.524895 systemd[1]: cri-containerd-fa02e25e87b15e2f48a92a373c9456feb99eee2b757d3822e3fb67ac0ecb034a.scope: Deactivated successfully.
May 15 12:41:04.530081 containerd[1570]: time="2025-05-15T12:41:04.530040838Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fa02e25e87b15e2f48a92a373c9456feb99eee2b757d3822e3fb67ac0ecb034a\" id:\"fa02e25e87b15e2f48a92a373c9456feb99eee2b757d3822e3fb67ac0ecb034a\" pid:4285 exit_status:137 exited_at:{seconds:1747312864 nanos:529732845}"
May 15 12:41:04.561030 containerd[1570]: time="2025-05-15T12:41:04.560991264Z" level=info msg="shim disconnected" id=fa02e25e87b15e2f48a92a373c9456feb99eee2b757d3822e3fb67ac0ecb034a namespace=k8s.io
May 15 12:41:04.561030 containerd[1570]: time="2025-05-15T12:41:04.561022045Z" level=warning msg="cleaning up after shim disconnected" id=fa02e25e87b15e2f48a92a373c9456feb99eee2b757d3822e3fb67ac0ecb034a namespace=k8s.io
May 15 12:41:04.561190 containerd[1570]: time="2025-05-15T12:41:04.561030762Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 15 12:41:04.561658 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fa02e25e87b15e2f48a92a373c9456feb99eee2b757d3822e3fb67ac0ecb034a-rootfs.mount: Deactivated successfully.
May 15 12:41:04.579189 containerd[1570]: time="2025-05-15T12:41:04.579144127Z" level=info msg="received exit event sandbox_id:\"fa02e25e87b15e2f48a92a373c9456feb99eee2b757d3822e3fb67ac0ecb034a\" exit_status:137 exited_at:{seconds:1747312864 nanos:529732845}"
May 15 12:41:04.584150 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-fa02e25e87b15e2f48a92a373c9456feb99eee2b757d3822e3fb67ac0ecb034a-shm.mount: Deactivated successfully.
May 15 12:41:04.642565 systemd-networkd[1466]: cali517ff4fc318: Link DOWN
May 15 12:41:04.642574 systemd-networkd[1466]: cali517ff4fc318: Lost carrier
May 15 12:41:04.708763 containerd[1570]: time="2025-05-15T12:41:04.708569017Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f0ca9e65364680a36033b88ab53d62c415bfd51711cfa7ce9f49e930a8b0e780\" id:\"10dc30d49045a432bb8e7823514c4a339601874dbca9f1c659540fe2b5dbbc37\" pid:5259 exited_at:{seconds:1747312864 nanos:706614261}"
May 15 12:41:04.717833 kubelet[2830]: E0515 12:41:04.717810 2830 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16"
May 15 12:41:04.750380 containerd[1570]: 2025-05-15 12:41:04.640 [INFO][5240] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="fa02e25e87b15e2f48a92a373c9456feb99eee2b757d3822e3fb67ac0ecb034a"
May 15 12:41:04.750380 containerd[1570]: 2025-05-15 12:41:04.640 [INFO][5240] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="fa02e25e87b15e2f48a92a373c9456feb99eee2b757d3822e3fb67ac0ecb034a" iface="eth0" netns="/var/run/netns/cni-88cb3ee1-ca27-f9bb-5609-f835a6aadac8"
May 15 12:41:04.750380 containerd[1570]: 2025-05-15 12:41:04.640 [INFO][5240] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="fa02e25e87b15e2f48a92a373c9456feb99eee2b757d3822e3fb67ac0ecb034a" iface="eth0" netns="/var/run/netns/cni-88cb3ee1-ca27-f9bb-5609-f835a6aadac8"
May 15 12:41:04.750380 containerd[1570]: 2025-05-15 12:41:04.653 [INFO][5240] cni-plugin/dataplane_linux.go 604: Deleted device in netns. ContainerID="fa02e25e87b15e2f48a92a373c9456feb99eee2b757d3822e3fb67ac0ecb034a" after=12.36164ms iface="eth0" netns="/var/run/netns/cni-88cb3ee1-ca27-f9bb-5609-f835a6aadac8"
May 15 12:41:04.750380 containerd[1570]: 2025-05-15 12:41:04.653 [INFO][5240] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="fa02e25e87b15e2f48a92a373c9456feb99eee2b757d3822e3fb67ac0ecb034a"
May 15 12:41:04.750380 containerd[1570]: 2025-05-15 12:41:04.653 [INFO][5240] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fa02e25e87b15e2f48a92a373c9456feb99eee2b757d3822e3fb67ac0ecb034a"
May 15 12:41:04.750380 containerd[1570]: 2025-05-15 12:41:04.684 [INFO][5273] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fa02e25e87b15e2f48a92a373c9456feb99eee2b757d3822e3fb67ac0ecb034a" HandleID="k8s-pod-network.fa02e25e87b15e2f48a92a373c9456feb99eee2b757d3822e3fb67ac0ecb034a" Workload="172--236--125--189-k8s-calico--apiserver--78b5784dc8--8lbpp-eth0"
May 15 12:41:04.750380 containerd[1570]: 2025-05-15 12:41:04.684 [INFO][5273] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
May 15 12:41:04.750380 containerd[1570]: 2025-05-15 12:41:04.684 [INFO][5273] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
May 15 12:41:04.750380 containerd[1570]: 2025-05-15 12:41:04.742 [INFO][5273] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="fa02e25e87b15e2f48a92a373c9456feb99eee2b757d3822e3fb67ac0ecb034a" HandleID="k8s-pod-network.fa02e25e87b15e2f48a92a373c9456feb99eee2b757d3822e3fb67ac0ecb034a" Workload="172--236--125--189-k8s-calico--apiserver--78b5784dc8--8lbpp-eth0"
May 15 12:41:04.750380 containerd[1570]: 2025-05-15 12:41:04.742 [INFO][5273] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fa02e25e87b15e2f48a92a373c9456feb99eee2b757d3822e3fb67ac0ecb034a" HandleID="k8s-pod-network.fa02e25e87b15e2f48a92a373c9456feb99eee2b757d3822e3fb67ac0ecb034a" Workload="172--236--125--189-k8s-calico--apiserver--78b5784dc8--8lbpp-eth0"
May 15 12:41:04.750380 containerd[1570]: 2025-05-15 12:41:04.744 [INFO][5273] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
May 15 12:41:04.750380 containerd[1570]: 2025-05-15 12:41:04.747 [INFO][5240] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="fa02e25e87b15e2f48a92a373c9456feb99eee2b757d3822e3fb67ac0ecb034a"
May 15 12:41:04.754226 containerd[1570]: time="2025-05-15T12:41:04.753574132Z" level=info msg="TearDown network for sandbox \"fa02e25e87b15e2f48a92a373c9456feb99eee2b757d3822e3fb67ac0ecb034a\" successfully"
May 15 12:41:04.754226 containerd[1570]: time="2025-05-15T12:41:04.753629685Z" level=info msg="StopPodSandbox for \"fa02e25e87b15e2f48a92a373c9456feb99eee2b757d3822e3fb67ac0ecb034a\" returns successfully"
May 15 12:41:04.754608 systemd[1]: run-netns-cni\x2d88cb3ee1\x2dca27\x2df9bb\x2d5609\x2df835a6aadac8.mount: Deactivated successfully.
May 15 12:41:04.947906 kubelet[2830]: I0515 12:41:04.947786 2830 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mxjds\" (UniqueName: \"kubernetes.io/projected/28308a9a-6a5a-4c07-b05d-23fd8cc4a3e9-kube-api-access-mxjds\") pod \"28308a9a-6a5a-4c07-b05d-23fd8cc4a3e9\" (UID: \"28308a9a-6a5a-4c07-b05d-23fd8cc4a3e9\") "
May 15 12:41:04.949204 kubelet[2830]: I0515 12:41:04.948181 2830 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/28308a9a-6a5a-4c07-b05d-23fd8cc4a3e9-calico-apiserver-certs\") pod \"28308a9a-6a5a-4c07-b05d-23fd8cc4a3e9\" (UID: \"28308a9a-6a5a-4c07-b05d-23fd8cc4a3e9\") "
May 15 12:41:04.952530 kubelet[2830]: I0515 12:41:04.952488 2830 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28308a9a-6a5a-4c07-b05d-23fd8cc4a3e9-kube-api-access-mxjds" (OuterVolumeSpecName: "kube-api-access-mxjds") pod "28308a9a-6a5a-4c07-b05d-23fd8cc4a3e9" (UID: "28308a9a-6a5a-4c07-b05d-23fd8cc4a3e9"). InnerVolumeSpecName "kube-api-access-mxjds". PluginName "kubernetes.io/projected", VolumeGidValue ""
May 15 12:41:04.952666 kubelet[2830]: I0515 12:41:04.952649 2830 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28308a9a-6a5a-4c07-b05d-23fd8cc4a3e9-calico-apiserver-certs" (OuterVolumeSpecName: "calico-apiserver-certs") pod "28308a9a-6a5a-4c07-b05d-23fd8cc4a3e9" (UID: "28308a9a-6a5a-4c07-b05d-23fd8cc4a3e9"). InnerVolumeSpecName "calico-apiserver-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
May 15 12:41:05.006346 systemd[1]: var-lib-kubelet-pods-28308a9a\x2d6a5a\x2d4c07\x2db05d\x2d23fd8cc4a3e9-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dmxjds.mount: Deactivated successfully.
May 15 12:41:05.006473 systemd[1]: var-lib-kubelet-pods-28308a9a\x2d6a5a\x2d4c07\x2db05d\x2d23fd8cc4a3e9-volumes-kubernetes.io\x7esecret-calico\x2dapiserver\x2dcerts.mount: Deactivated successfully.
May 15 12:41:05.053255 kubelet[2830]: I0515 12:41:05.053177 2830 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-mxjds\" (UniqueName: \"kubernetes.io/projected/28308a9a-6a5a-4c07-b05d-23fd8cc4a3e9-kube-api-access-mxjds\") on node \"172-236-125-189\" DevicePath \"\""
May 15 12:41:05.053255 kubelet[2830]: I0515 12:41:05.053215 2830 reconciler_common.go:289] "Volume detached for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/28308a9a-6a5a-4c07-b05d-23fd8cc4a3e9-calico-apiserver-certs\") on node \"172-236-125-189\" DevicePath \"\""
May 15 12:41:05.270200 systemd-networkd[1466]: caliceb949a3987: Gained IPv6LL
May 15 12:41:05.294307 kubelet[2830]: I0515 12:41:05.294262 2830 scope.go:117] "RemoveContainer" containerID="d91ab86bb90f18d9866c043f1dbd61b66b239c3beeab5fd2a54288ab2a972286"
May 15 12:41:05.298005 containerd[1570]: time="2025-05-15T12:41:05.297955396Z" level=info msg="RemoveContainer for \"d91ab86bb90f18d9866c043f1dbd61b66b239c3beeab5fd2a54288ab2a972286\""
May 15 12:41:05.304025 systemd[1]: Removed slice kubepods-besteffort-pod28308a9a_6a5a_4c07_b05d_23fd8cc4a3e9.slice - libcontainer container kubepods-besteffort-pod28308a9a_6a5a_4c07_b05d_23fd8cc4a3e9.slice.
May 15 12:41:05.307260 containerd[1570]: time="2025-05-15T12:41:05.307227251Z" level=info msg="RemoveContainer for \"d91ab86bb90f18d9866c043f1dbd61b66b239c3beeab5fd2a54288ab2a972286\" returns successfully"
May 15 12:41:05.307535 kubelet[2830]: I0515 12:41:05.307520 2830 scope.go:117] "RemoveContainer" containerID="d91ab86bb90f18d9866c043f1dbd61b66b239c3beeab5fd2a54288ab2a972286"
May 15 12:41:05.307790 containerd[1570]: time="2025-05-15T12:41:05.307761560Z" level=error msg="ContainerStatus for \"d91ab86bb90f18d9866c043f1dbd61b66b239c3beeab5fd2a54288ab2a972286\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d91ab86bb90f18d9866c043f1dbd61b66b239c3beeab5fd2a54288ab2a972286\": not found"
May 15 12:41:05.307943 kubelet[2830]: E0515 12:41:05.307895 2830 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d91ab86bb90f18d9866c043f1dbd61b66b239c3beeab5fd2a54288ab2a972286\": not found" containerID="d91ab86bb90f18d9866c043f1dbd61b66b239c3beeab5fd2a54288ab2a972286"
May 15 12:41:05.308194 kubelet[2830]: I0515 12:41:05.307931 2830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d91ab86bb90f18d9866c043f1dbd61b66b239c3beeab5fd2a54288ab2a972286"} err="failed to get container status \"d91ab86bb90f18d9866c043f1dbd61b66b239c3beeab5fd2a54288ab2a972286\": rpc error: code = NotFound desc = an error occurred when try to find container \"d91ab86bb90f18d9866c043f1dbd61b66b239c3beeab5fd2a54288ab2a972286\": not found"
May 15 12:41:05.912852 kubelet[2830]: I0515 12:41:05.912051 2830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="28308a9a-6a5a-4c07-b05d-23fd8cc4a3e9" path="/var/lib/kubelet/pods/28308a9a-6a5a-4c07-b05d-23fd8cc4a3e9/volumes"
May 15 12:41:05.913701 kubelet[2830]: I0515 12:41:05.913638 2830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f51c33cf-e651-4159-ba52-866ced1779f7" path="/var/lib/kubelet/pods/f51c33cf-e651-4159-ba52-866ced1779f7/volumes"
May 15 12:41:08.750943 containerd[1570]: time="2025-05-15T12:41:08.750900247Z" level=info msg="StopContainer for \"ce2af80fcf814145a80b6f6ed5574fd6c62fc43152d3c9fcb14ee42e66d6cd1a\" with timeout 300 (s)"
May 15 12:41:08.751865 containerd[1570]: time="2025-05-15T12:41:08.751523998Z" level=info msg="Stop container \"ce2af80fcf814145a80b6f6ed5574fd6c62fc43152d3c9fcb14ee42e66d6cd1a\" with signal terminated"
May 15 12:41:09.020032 containerd[1570]: time="2025-05-15T12:41:09.019788699Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f0ca9e65364680a36033b88ab53d62c415bfd51711cfa7ce9f49e930a8b0e780\" id:\"2dc62e0b96e7e4adba8ea5c06cb26796f9a39d9076e850fb37d4bfc59db2b379\" pid:5314 exited_at:{seconds:1747312869 nanos:18592620}"
May 15 12:41:09.022570 containerd[1570]: time="2025-05-15T12:41:09.022545640Z" level=info msg="StopContainer for \"f0ca9e65364680a36033b88ab53d62c415bfd51711cfa7ce9f49e930a8b0e780\" with timeout 5 (s)"
May 15 12:41:09.022844 containerd[1570]: time="2025-05-15T12:41:09.022808652Z" level=info msg="Stop container \"f0ca9e65364680a36033b88ab53d62c415bfd51711cfa7ce9f49e930a8b0e780\" with signal terminated"
May 15 12:41:09.051278 systemd[1]: cri-containerd-f0ca9e65364680a36033b88ab53d62c415bfd51711cfa7ce9f49e930a8b0e780.scope: Deactivated successfully.
May 15 12:41:09.051656 systemd[1]: cri-containerd-f0ca9e65364680a36033b88ab53d62c415bfd51711cfa7ce9f49e930a8b0e780.scope: Consumed 1.881s CPU time, 153M memory peak, 656K written to disk.
May 15 12:41:09.054474 containerd[1570]: time="2025-05-15T12:41:09.054281040Z" level=info msg="received exit event container_id:\"f0ca9e65364680a36033b88ab53d62c415bfd51711cfa7ce9f49e930a8b0e780\" id:\"f0ca9e65364680a36033b88ab53d62c415bfd51711cfa7ce9f49e930a8b0e780\" pid:3753 exited_at:{seconds:1747312869 nanos:53562157}"
May 15 12:41:09.054659 containerd[1570]: time="2025-05-15T12:41:09.054642566Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f0ca9e65364680a36033b88ab53d62c415bfd51711cfa7ce9f49e930a8b0e780\" id:\"f0ca9e65364680a36033b88ab53d62c415bfd51711cfa7ce9f49e930a8b0e780\" pid:3753 exited_at:{seconds:1747312869 nanos:53562157}"
May 15 12:41:09.081494 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f0ca9e65364680a36033b88ab53d62c415bfd51711cfa7ce9f49e930a8b0e780-rootfs.mount: Deactivated successfully.
May 15 12:41:09.091453 containerd[1570]: time="2025-05-15T12:41:09.091408025Z" level=info msg="StopContainer for \"f0ca9e65364680a36033b88ab53d62c415bfd51711cfa7ce9f49e930a8b0e780\" returns successfully"
May 15 12:41:09.092576 containerd[1570]: time="2025-05-15T12:41:09.092556955Z" level=info msg="StopPodSandbox for \"8abd2a4186fac809bfb50819f8f51048f513f098c45622e059d9de857ea226cf\""
May 15 12:41:09.092741 containerd[1570]: time="2025-05-15T12:41:09.092725461Z" level=info msg="Container to stop \"de6661dc1e0b2d28a9136bc593f19588e93422143bc78cac0cbf7ab7a54aa927\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 15 12:41:09.092800 containerd[1570]: time="2025-05-15T12:41:09.092788025Z" level=info msg="Container to stop \"f0ca9e65364680a36033b88ab53d62c415bfd51711cfa7ce9f49e930a8b0e780\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 15 12:41:09.092847 containerd[1570]: time="2025-05-15T12:41:09.092836462Z" level=info msg="Container to stop \"023dbd80477d1b39889f2ae55c701472b18d3fee0fd23fbacb4f0c2e288a2a39\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 15 12:41:09.109764 systemd[1]: cri-containerd-8abd2a4186fac809bfb50819f8f51048f513f098c45622e059d9de857ea226cf.scope: Deactivated successfully.
May 15 12:41:09.114684 containerd[1570]: time="2025-05-15T12:41:09.114640031Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8abd2a4186fac809bfb50819f8f51048f513f098c45622e059d9de857ea226cf\" id:\"8abd2a4186fac809bfb50819f8f51048f513f098c45622e059d9de857ea226cf\" pid:3288 exit_status:137 exited_at:{seconds:1747312869 nanos:112808338}"
May 15 12:41:09.122205 containerd[1570]: time="2025-05-15T12:41:09.122048750Z" level=info msg="StopContainer for \"bb47f88d23d868d5ebbe006bc9e7f942271143c5fb7c47820ae9bdcafc1fb884\" with timeout 30 (s)"
May 15 12:41:09.124183 containerd[1570]: time="2025-05-15T12:41:09.123961701Z" level=info msg="Stop container \"bb47f88d23d868d5ebbe006bc9e7f942271143c5fb7c47820ae9bdcafc1fb884\" with signal terminated"
May 15 12:41:09.160569 systemd[1]: cri-containerd-bb47f88d23d868d5ebbe006bc9e7f942271143c5fb7c47820ae9bdcafc1fb884.scope: Deactivated successfully.
May 15 12:41:09.166826 containerd[1570]: time="2025-05-15T12:41:09.166677260Z" level=info msg="received exit event container_id:\"bb47f88d23d868d5ebbe006bc9e7f942271143c5fb7c47820ae9bdcafc1fb884\" id:\"bb47f88d23d868d5ebbe006bc9e7f942271143c5fb7c47820ae9bdcafc1fb884\" pid:4774 exit_status:2 exited_at:{seconds:1747312869 nanos:166425465}"
May 15 12:41:09.171334 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8abd2a4186fac809bfb50819f8f51048f513f098c45622e059d9de857ea226cf-rootfs.mount: Deactivated successfully.
May 15 12:41:09.175324 containerd[1570]: time="2025-05-15T12:41:09.175299953Z" level=info msg="shim disconnected" id=8abd2a4186fac809bfb50819f8f51048f513f098c45622e059d9de857ea226cf namespace=k8s.io
May 15 12:41:09.175324 containerd[1570]: time="2025-05-15T12:41:09.175324976Z" level=warning msg="cleaning up after shim disconnected" id=8abd2a4186fac809bfb50819f8f51048f513f098c45622e059d9de857ea226cf namespace=k8s.io
May 15 12:41:09.175455 containerd[1570]: time="2025-05-15T12:41:09.175333554Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 15 12:41:09.196120 containerd[1570]: time="2025-05-15T12:41:09.196085426Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bb47f88d23d868d5ebbe006bc9e7f942271143c5fb7c47820ae9bdcafc1fb884\" id:\"bb47f88d23d868d5ebbe006bc9e7f942271143c5fb7c47820ae9bdcafc1fb884\" pid:4774 exit_status:2 exited_at:{seconds:1747312869 nanos:166425465}"
May 15 12:41:09.198167 containerd[1570]: time="2025-05-15T12:41:09.198140211Z" level=info msg="received exit event sandbox_id:\"8abd2a4186fac809bfb50819f8f51048f513f098c45622e059d9de857ea226cf\" exit_status:137 exited_at:{seconds:1747312869 nanos:112808338}"
May 15 12:41:09.200279 containerd[1570]: time="2025-05-15T12:41:09.200101470Z" level=info msg="TearDown network for sandbox \"8abd2a4186fac809bfb50819f8f51048f513f098c45622e059d9de857ea226cf\" successfully"
May 15 12:41:09.200279 containerd[1570]: time="2025-05-15T12:41:09.200126173Z" level=info msg="StopPodSandbox for \"8abd2a4186fac809bfb50819f8f51048f513f098c45622e059d9de857ea226cf\" returns successfully"
May 15 12:41:09.202189 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8abd2a4186fac809bfb50819f8f51048f513f098c45622e059d9de857ea226cf-shm.mount: Deactivated successfully.
May 15 12:41:09.218603 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bb47f88d23d868d5ebbe006bc9e7f942271143c5fb7c47820ae9bdcafc1fb884-rootfs.mount: Deactivated successfully.
May 15 12:41:09.227375 containerd[1570]: time="2025-05-15T12:41:09.227297952Z" level=info msg="StopContainer for \"bb47f88d23d868d5ebbe006bc9e7f942271143c5fb7c47820ae9bdcafc1fb884\" returns successfully"
May 15 12:41:09.229626 containerd[1570]: time="2025-05-15T12:41:09.229604791Z" level=info msg="StopPodSandbox for \"5379b2ac23b2a1ea32c6cc4705e1c105234e0ab7b66afc66f1415997fe8905b9\""
May 15 12:41:09.230498 containerd[1570]: time="2025-05-15T12:41:09.230472975Z" level=info msg="Container to stop \"bb47f88d23d868d5ebbe006bc9e7f942271143c5fb7c47820ae9bdcafc1fb884\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 15 12:41:09.246225 systemd[1]: cri-containerd-5379b2ac23b2a1ea32c6cc4705e1c105234e0ab7b66afc66f1415997fe8905b9.scope: Deactivated successfully.
May 15 12:41:09.250839 containerd[1570]: time="2025-05-15T12:41:09.250351384Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5379b2ac23b2a1ea32c6cc4705e1c105234e0ab7b66afc66f1415997fe8905b9\" id:\"5379b2ac23b2a1ea32c6cc4705e1c105234e0ab7b66afc66f1415997fe8905b9\" pid:4439 exit_status:137 exited_at:{seconds:1747312869 nanos:250040955}"
May 15 12:41:09.256674 kubelet[2830]: I0515 12:41:09.255944 2830 topology_manager.go:215] "Topology Admit Handler" podUID="c61ff59e-722a-49e6-9f11-51b6b8ef3cf9" podNamespace="calico-system" podName="calico-node-f4w5w"
May 15 12:41:09.262784 kubelet[2830]: E0515 12:41:09.262741 2830 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3e0fb5f0-ddfc-4022-865c-cb2de4ca62e8" containerName="calico-node"
May 15 12:41:09.263666 kubelet[2830]: E0515 12:41:09.263020 2830 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="28308a9a-6a5a-4c07-b05d-23fd8cc4a3e9" containerName="calico-apiserver"
May 15 12:41:09.263666 kubelet[2830]: E0515 12:41:09.263035 2830 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f51c33cf-e651-4159-ba52-866ced1779f7" containerName="calico-apiserver"
May 15 12:41:09.263666 kubelet[2830]: E0515 12:41:09.263042 2830 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3e0fb5f0-ddfc-4022-865c-cb2de4ca62e8" containerName="flexvol-driver"
May 15 12:41:09.263666 kubelet[2830]: E0515 12:41:09.263048 2830 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3e0fb5f0-ddfc-4022-865c-cb2de4ca62e8" containerName="install-cni"
May 15 12:41:09.263666 kubelet[2830]: I0515 12:41:09.263155 2830 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e0fb5f0-ddfc-4022-865c-cb2de4ca62e8" containerName="calico-node"
May 15 12:41:09.263666 kubelet[2830]: I0515 12:41:09.263164 2830 memory_manager.go:354] "RemoveStaleState removing state" podUID="f51c33cf-e651-4159-ba52-866ced1779f7" containerName="calico-apiserver"
May 15 12:41:09.263666 kubelet[2830]: I0515 12:41:09.263170 2830 memory_manager.go:354] "RemoveStaleState removing state" podUID="28308a9a-6a5a-4c07-b05d-23fd8cc4a3e9" containerName="calico-apiserver"
May 15 12:41:09.276499 systemd[1]: Created slice kubepods-besteffort-podc61ff59e_722a_49e6_9f11_51b6b8ef3cf9.slice - libcontainer container kubepods-besteffort-podc61ff59e_722a_49e6_9f11_51b6b8ef3cf9.slice.
May 15 12:41:09.306075 containerd[1570]: time="2025-05-15T12:41:09.305805783Z" level=info msg="shim disconnected" id=5379b2ac23b2a1ea32c6cc4705e1c105234e0ab7b66afc66f1415997fe8905b9 namespace=k8s.io
May 15 12:41:09.306075 containerd[1570]: time="2025-05-15T12:41:09.305840424Z" level=warning msg="cleaning up after shim disconnected" id=5379b2ac23b2a1ea32c6cc4705e1c105234e0ab7b66afc66f1415997fe8905b9 namespace=k8s.io
May 15 12:41:09.306075 containerd[1570]: time="2025-05-15T12:41:09.305850012Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 15 12:41:09.309872 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5379b2ac23b2a1ea32c6cc4705e1c105234e0ab7b66afc66f1415997fe8905b9-rootfs.mount: Deactivated successfully.
May 15 12:41:09.321477 kubelet[2830]: I0515 12:41:09.321369 2830 scope.go:117] "RemoveContainer" containerID="f0ca9e65364680a36033b88ab53d62c415bfd51711cfa7ce9f49e930a8b0e780"
May 15 12:41:09.325279 containerd[1570]: time="2025-05-15T12:41:09.325237799Z" level=info msg="RemoveContainer for \"f0ca9e65364680a36033b88ab53d62c415bfd51711cfa7ce9f49e930a8b0e780\""
May 15 12:41:09.333008 containerd[1570]: time="2025-05-15T12:41:09.332931884Z" level=info msg="RemoveContainer for \"f0ca9e65364680a36033b88ab53d62c415bfd51711cfa7ce9f49e930a8b0e780\" returns successfully"
May 15 12:41:09.333301 kubelet[2830]: I0515 12:41:09.333240 2830 scope.go:117] "RemoveContainer" containerID="023dbd80477d1b39889f2ae55c701472b18d3fee0fd23fbacb4f0c2e288a2a39"
May 15 12:41:09.338010 containerd[1570]: time="2025-05-15T12:41:09.337093940Z" level=info msg="RemoveContainer for \"023dbd80477d1b39889f2ae55c701472b18d3fee0fd23fbacb4f0c2e288a2a39\""
May 15 12:41:09.339171 containerd[1570]: time="2025-05-15T12:41:09.339148804Z" level=info msg="received exit event sandbox_id:\"5379b2ac23b2a1ea32c6cc4705e1c105234e0ab7b66afc66f1415997fe8905b9\" exit_status:137 exited_at:{seconds:1747312869 nanos:250040955}"
May 15 12:41:09.344122 containerd[1570]: time="2025-05-15T12:41:09.344101173Z" level=info msg="RemoveContainer for \"023dbd80477d1b39889f2ae55c701472b18d3fee0fd23fbacb4f0c2e288a2a39\" returns successfully"
May 15 12:41:09.344410 kubelet[2830]: I0515 12:41:09.344385 2830 scope.go:117] "RemoveContainer" containerID="de6661dc1e0b2d28a9136bc593f19588e93422143bc78cac0cbf7ab7a54aa927"
May 15 12:41:09.346635 containerd[1570]: time="2025-05-15T12:41:09.346616388Z" level=info msg="RemoveContainer for \"de6661dc1e0b2d28a9136bc593f19588e93422143bc78cac0cbf7ab7a54aa927\""
May 15 12:41:09.354942 containerd[1570]: time="2025-05-15T12:41:09.354906808Z" level=info msg="RemoveContainer for \"de6661dc1e0b2d28a9136bc593f19588e93422143bc78cac0cbf7ab7a54aa927\" returns successfully"
May 15 12:41:09.355338 kubelet[2830]: I0515 12:41:09.355322 2830 scope.go:117] "RemoveContainer" containerID="f0ca9e65364680a36033b88ab53d62c415bfd51711cfa7ce9f49e930a8b0e780"
May 15 12:41:09.355683 containerd[1570]: time="2025-05-15T12:41:09.355600547Z" level=error msg="ContainerStatus for \"f0ca9e65364680a36033b88ab53d62c415bfd51711cfa7ce9f49e930a8b0e780\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f0ca9e65364680a36033b88ab53d62c415bfd51711cfa7ce9f49e930a8b0e780\": not found"
May 15 12:41:09.355874 kubelet[2830]: E0515 12:41:09.355832 2830 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f0ca9e65364680a36033b88ab53d62c415bfd51711cfa7ce9f49e930a8b0e780\": not found" containerID="f0ca9e65364680a36033b88ab53d62c415bfd51711cfa7ce9f49e930a8b0e780"
May 15 12:41:09.356105 kubelet[2830]: I0515 12:41:09.356009 2830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f0ca9e65364680a36033b88ab53d62c415bfd51711cfa7ce9f49e930a8b0e780"} err="failed to get container status \"f0ca9e65364680a36033b88ab53d62c415bfd51711cfa7ce9f49e930a8b0e780\": rpc error: code = NotFound desc = an error occurred when try to find container \"f0ca9e65364680a36033b88ab53d62c415bfd51711cfa7ce9f49e930a8b0e780\": not found"
May 15 12:41:09.356105 kubelet[2830]: I0515 12:41:09.356050 2830 scope.go:117] "RemoveContainer" containerID="023dbd80477d1b39889f2ae55c701472b18d3fee0fd23fbacb4f0c2e288a2a39"
May 15 12:41:09.356281 containerd[1570]: time="2025-05-15T12:41:09.356200210Z" level=error msg="ContainerStatus for \"023dbd80477d1b39889f2ae55c701472b18d3fee0fd23fbacb4f0c2e288a2a39\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"023dbd80477d1b39889f2ae55c701472b18d3fee0fd23fbacb4f0c2e288a2a39\": not found"
May 15 12:41:09.356411 kubelet[2830]: E0515 12:41:09.356328 2830 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"023dbd80477d1b39889f2ae55c701472b18d3fee0fd23fbacb4f0c2e288a2a39\": not found" containerID="023dbd80477d1b39889f2ae55c701472b18d3fee0fd23fbacb4f0c2e288a2a39"
May 15 12:41:09.356446 kubelet[2830]: I0515 12:41:09.356405 2830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"023dbd80477d1b39889f2ae55c701472b18d3fee0fd23fbacb4f0c2e288a2a39"} err="failed to get container status \"023dbd80477d1b39889f2ae55c701472b18d3fee0fd23fbacb4f0c2e288a2a39\": rpc error: code = NotFound desc = an error occurred when try to find container \"023dbd80477d1b39889f2ae55c701472b18d3fee0fd23fbacb4f0c2e288a2a39\": not found"
May 15 12:41:09.356446 kubelet[2830]: I0515 12:41:09.356422 2830 scope.go:117] "RemoveContainer" containerID="de6661dc1e0b2d28a9136bc593f19588e93422143bc78cac0cbf7ab7a54aa927"
May 15 12:41:09.356631 containerd[1570]: time="2025-05-15T12:41:09.356561666Z" level=error msg="ContainerStatus for \"de6661dc1e0b2d28a9136bc593f19588e93422143bc78cac0cbf7ab7a54aa927\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"de6661dc1e0b2d28a9136bc593f19588e93422143bc78cac0cbf7ab7a54aa927\": not found"
May 15 12:41:09.356873 kubelet[2830]: E0515 12:41:09.356806 2830 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"de6661dc1e0b2d28a9136bc593f19588e93422143bc78cac0cbf7ab7a54aa927\": not found" containerID="de6661dc1e0b2d28a9136bc593f19588e93422143bc78cac0cbf7ab7a54aa927"
May 15 12:41:09.356910 kubelet[2830]: I0515 12:41:09.356877 2830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"de6661dc1e0b2d28a9136bc593f19588e93422143bc78cac0cbf7ab7a54aa927"} err="failed to get container status \"de6661dc1e0b2d28a9136bc593f19588e93422143bc78cac0cbf7ab7a54aa927\": rpc error: code = NotFound desc = an error occurred when try to find container \"de6661dc1e0b2d28a9136bc593f19588e93422143bc78cac0cbf7ab7a54aa927\": not found"
May 15 12:41:09.381295 kubelet[2830]: I0515 12:41:09.381203 2830 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/3e0fb5f0-ddfc-4022-865c-cb2de4ca62e8-policysync\") pod \"3e0fb5f0-ddfc-4022-865c-cb2de4ca62e8\" (UID: \"3e0fb5f0-ddfc-4022-865c-cb2de4ca62e8\") "
May 15 12:41:09.381295 kubelet[2830]: I0515 12:41:09.381245 2830 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3e0fb5f0-ddfc-4022-865c-cb2de4ca62e8-xtables-lock\") pod \"3e0fb5f0-ddfc-4022-865c-cb2de4ca62e8\" (UID: \"3e0fb5f0-ddfc-4022-865c-cb2de4ca62e8\") "
May 15 12:41:09.381295 kubelet[2830]: I0515 12:41:09.381264 2830 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/3e0fb5f0-ddfc-4022-865c-cb2de4ca62e8-flexvol-driver-host\") pod \"3e0fb5f0-ddfc-4022-865c-cb2de4ca62e8\" (UID: \"3e0fb5f0-ddfc-4022-865c-cb2de4ca62e8\") "
May 15 12:41:09.381295 kubelet[2830]: I0515 12:41:09.381294 2830 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/3e0fb5f0-ddfc-4022-865c-cb2de4ca62e8-node-certs\") pod \"3e0fb5f0-ddfc-4022-865c-cb2de4ca62e8\" (UID: \"3e0fb5f0-ddfc-4022-865c-cb2de4ca62e8\") "
May 15 12:41:09.381295 kubelet[2830]: I0515 12:41:09.381309 2830 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/3e0fb5f0-ddfc-4022-865c-cb2de4ca62e8-cni-log-dir\") pod \"3e0fb5f0-ddfc-4022-865c-cb2de4ca62e8\" (UID: \"3e0fb5f0-ddfc-4022-865c-cb2de4ca62e8\") "
May 15 12:41:09.381545 kubelet[2830]: I0515 12:41:09.381326 2830 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/3e0fb5f0-ddfc-4022-865c-cb2de4ca62e8-cni-bin-dir\") pod \"3e0fb5f0-ddfc-4022-865c-cb2de4ca62e8\" (UID: \"3e0fb5f0-ddfc-4022-865c-cb2de4ca62e8\") "
May 15 12:41:09.381545 kubelet[2830]: I0515 12:41:09.381339 2830 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3e0fb5f0-ddfc-4022-865c-cb2de4ca62e8-lib-modules\") pod \"3e0fb5f0-ddfc-4022-865c-cb2de4ca62e8\" (UID: \"3e0fb5f0-ddfc-4022-865c-cb2de4ca62e8\") "
May 15 12:41:09.381545 kubelet[2830]: I0515 12:41:09.381355 2830 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/3e0fb5f0-ddfc-4022-865c-cb2de4ca62e8-var-lib-calico\") pod \"3e0fb5f0-ddfc-4022-865c-cb2de4ca62e8\" (UID: \"3e0fb5f0-ddfc-4022-865c-cb2de4ca62e8\") "
May 15 12:41:09.381545 kubelet[2830]: I0515 12:41:09.381373 2830 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cs7h6\" (UniqueName: \"kubernetes.io/projected/3e0fb5f0-ddfc-4022-865c-cb2de4ca62e8-kube-api-access-cs7h6\") pod \"3e0fb5f0-ddfc-4022-865c-cb2de4ca62e8\" (UID: \"3e0fb5f0-ddfc-4022-865c-cb2de4ca62e8\") "
May 15 12:41:09.381545 kubelet[2830]: I0515 12:41:09.381393 2830 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/3e0fb5f0-ddfc-4022-865c-cb2de4ca62e8-var-run-calico\") pod \"3e0fb5f0-ddfc-4022-865c-cb2de4ca62e8\" (UID: \"3e0fb5f0-ddfc-4022-865c-cb2de4ca62e8\") "
May 15 12:41:09.381545 kubelet[2830]: I0515 12:41:09.381469 2830 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3e0fb5f0-ddfc-4022-865c-cb2de4ca62e8-tigera-ca-bundle\") pod \"3e0fb5f0-ddfc-4022-865c-cb2de4ca62e8\" (UID: \"3e0fb5f0-ddfc-4022-865c-cb2de4ca62e8\") "
May 15 12:41:09.381675 kubelet[2830]: I0515 12:41:09.381485 2830 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/3e0fb5f0-ddfc-4022-865c-cb2de4ca62e8-cni-net-dir\") pod \"3e0fb5f0-ddfc-4022-865c-cb2de4ca62e8\" (UID: \"3e0fb5f0-ddfc-4022-865c-cb2de4ca62e8\") "
May 15 12:41:09.381675 kubelet[2830]: I0515 12:41:09.381552 2830 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/c61ff59e-722a-49e6-9f11-51b6b8ef3cf9-var-run-calico\") pod \"calico-node-f4w5w\" (UID: \"c61ff59e-722a-49e6-9f11-51b6b8ef3cf9\") " pod="calico-system/calico-node-f4w5w"
May 15 12:41:09.381675 kubelet[2830]: I0515 12:41:09.381576 2830 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/c61ff59e-722a-49e6-9f11-51b6b8ef3cf9-cni-log-dir\") pod \"calico-node-f4w5w\" (UID: \"c61ff59e-722a-49e6-9f11-51b6b8ef3cf9\") " pod="calico-system/calico-node-f4w5w"
May 15 12:41:09.381675 kubelet[2830]: I0515 12:41:09.381595 2830 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brzxm\" (UniqueName: \"kubernetes.io/projected/c61ff59e-722a-49e6-9f11-51b6b8ef3cf9-kube-api-access-brzxm\") pod \"calico-node-f4w5w\" (UID: \"c61ff59e-722a-49e6-9f11-51b6b8ef3cf9\") " pod="calico-system/calico-node-f4w5w"
May 15 12:41:09.381675 kubelet[2830]: I0515 12:41:09.381611 2830 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/c61ff59e-722a-49e6-9f11-51b6b8ef3cf9-node-certs\") pod \"calico-node-f4w5w\" (UID: \"c61ff59e-722a-49e6-9f11-51b6b8ef3cf9\") " pod="calico-system/calico-node-f4w5w"
May 15 12:41:09.381788 kubelet[2830]: I0515 12:41:09.381626 2830 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/c61ff59e-722a-49e6-9f11-51b6b8ef3cf9-flexvol-driver-host\") pod \"calico-node-f4w5w\" (UID: \"c61ff59e-722a-49e6-9f11-51b6b8ef3cf9\") " pod="calico-system/calico-node-f4w5w"
May 15 12:41:09.381788 kubelet[2830]: I0515 12:41:09.381647 2830 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c61ff59e-722a-49e6-9f11-51b6b8ef3cf9-lib-modules\") pod \"calico-node-f4w5w\" (UID: \"c61ff59e-722a-49e6-9f11-51b6b8ef3cf9\") " pod="calico-system/calico-node-f4w5w"
May 15 12:41:09.381788 kubelet[2830]: I0515 12:41:09.381677 2830 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c61ff59e-722a-49e6-9f11-51b6b8ef3cf9-tigera-ca-bundle\") pod \"calico-node-f4w5w\" (UID: \"c61ff59e-722a-49e6-9f11-51b6b8ef3cf9\") " pod="calico-system/calico-node-f4w5w"
May 15 12:41:09.381788 kubelet[2830]: I0515 12:41:09.381697 2830 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/c61ff59e-722a-49e6-9f11-51b6b8ef3cf9-var-lib-calico\") pod \"calico-node-f4w5w\" (UID: \"c61ff59e-722a-49e6-9f11-51b6b8ef3cf9\") " pod="calico-system/calico-node-f4w5w"
May 15 12:41:09.381788 kubelet[2830]: I0515 12:41:09.381709 2830 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/c61ff59e-722a-49e6-9f11-51b6b8ef3cf9-cni-bin-dir\") pod \"calico-node-f4w5w\" (UID: \"c61ff59e-722a-49e6-9f11-51b6b8ef3cf9\") " pod="calico-system/calico-node-f4w5w"
May 15 12:41:09.381897 kubelet[2830]: I0515 12:41:09.381723 2830 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName:
\"kubernetes.io/host-path/c61ff59e-722a-49e6-9f11-51b6b8ef3cf9-xtables-lock\") pod \"calico-node-f4w5w\" (UID: \"c61ff59e-722a-49e6-9f11-51b6b8ef3cf9\") " pod="calico-system/calico-node-f4w5w" May 15 12:41:09.381897 kubelet[2830]: I0515 12:41:09.381734 2830 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/c61ff59e-722a-49e6-9f11-51b6b8ef3cf9-policysync\") pod \"calico-node-f4w5w\" (UID: \"c61ff59e-722a-49e6-9f11-51b6b8ef3cf9\") " pod="calico-system/calico-node-f4w5w" May 15 12:41:09.381897 kubelet[2830]: I0515 12:41:09.381748 2830 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/c61ff59e-722a-49e6-9f11-51b6b8ef3cf9-cni-net-dir\") pod \"calico-node-f4w5w\" (UID: \"c61ff59e-722a-49e6-9f11-51b6b8ef3cf9\") " pod="calico-system/calico-node-f4w5w" May 15 12:41:09.381897 kubelet[2830]: I0515 12:41:09.381844 2830 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e0fb5f0-ddfc-4022-865c-cb2de4ca62e8-policysync" (OuterVolumeSpecName: "policysync") pod "3e0fb5f0-ddfc-4022-865c-cb2de4ca62e8" (UID: "3e0fb5f0-ddfc-4022-865c-cb2de4ca62e8"). InnerVolumeSpecName "policysync". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 15 12:41:09.381897 kubelet[2830]: I0515 12:41:09.381875 2830 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e0fb5f0-ddfc-4022-865c-cb2de4ca62e8-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "3e0fb5f0-ddfc-4022-865c-cb2de4ca62e8" (UID: "3e0fb5f0-ddfc-4022-865c-cb2de4ca62e8"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 15 12:41:09.382024 kubelet[2830]: I0515 12:41:09.381891 2830 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e0fb5f0-ddfc-4022-865c-cb2de4ca62e8-flexvol-driver-host" (OuterVolumeSpecName: "flexvol-driver-host") pod "3e0fb5f0-ddfc-4022-865c-cb2de4ca62e8" (UID: "3e0fb5f0-ddfc-4022-865c-cb2de4ca62e8"). InnerVolumeSpecName "flexvol-driver-host". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 15 12:41:09.382285 kubelet[2830]: I0515 12:41:09.382257 2830 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e0fb5f0-ddfc-4022-865c-cb2de4ca62e8-cni-log-dir" (OuterVolumeSpecName: "cni-log-dir") pod "3e0fb5f0-ddfc-4022-865c-cb2de4ca62e8" (UID: "3e0fb5f0-ddfc-4022-865c-cb2de4ca62e8"). InnerVolumeSpecName "cni-log-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 15 12:41:09.382351 kubelet[2830]: I0515 12:41:09.382289 2830 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e0fb5f0-ddfc-4022-865c-cb2de4ca62e8-cni-bin-dir" (OuterVolumeSpecName: "cni-bin-dir") pod "3e0fb5f0-ddfc-4022-865c-cb2de4ca62e8" (UID: "3e0fb5f0-ddfc-4022-865c-cb2de4ca62e8"). InnerVolumeSpecName "cni-bin-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 15 12:41:09.382351 kubelet[2830]: I0515 12:41:09.382318 2830 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e0fb5f0-ddfc-4022-865c-cb2de4ca62e8-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "3e0fb5f0-ddfc-4022-865c-cb2de4ca62e8" (UID: "3e0fb5f0-ddfc-4022-865c-cb2de4ca62e8"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 15 12:41:09.382351 kubelet[2830]: I0515 12:41:09.382335 2830 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e0fb5f0-ddfc-4022-865c-cb2de4ca62e8-var-lib-calico" (OuterVolumeSpecName: "var-lib-calico") pod "3e0fb5f0-ddfc-4022-865c-cb2de4ca62e8" (UID: "3e0fb5f0-ddfc-4022-865c-cb2de4ca62e8"). InnerVolumeSpecName "var-lib-calico". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 15 12:41:09.384538 kubelet[2830]: I0515 12:41:09.384488 2830 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e0fb5f0-ddfc-4022-865c-cb2de4ca62e8-cni-net-dir" (OuterVolumeSpecName: "cni-net-dir") pod "3e0fb5f0-ddfc-4022-865c-cb2de4ca62e8" (UID: "3e0fb5f0-ddfc-4022-865c-cb2de4ca62e8"). InnerVolumeSpecName "cni-net-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 15 12:41:09.385558 kubelet[2830]: I0515 12:41:09.385124 2830 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e0fb5f0-ddfc-4022-865c-cb2de4ca62e8-var-run-calico" (OuterVolumeSpecName: "var-run-calico") pod "3e0fb5f0-ddfc-4022-865c-cb2de4ca62e8" (UID: "3e0fb5f0-ddfc-4022-865c-cb2de4ca62e8"). InnerVolumeSpecName "var-run-calico". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 15 12:41:09.389578 kubelet[2830]: I0515 12:41:09.389073 2830 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e0fb5f0-ddfc-4022-865c-cb2de4ca62e8-kube-api-access-cs7h6" (OuterVolumeSpecName: "kube-api-access-cs7h6") pod "3e0fb5f0-ddfc-4022-865c-cb2de4ca62e8" (UID: "3e0fb5f0-ddfc-4022-865c-cb2de4ca62e8"). InnerVolumeSpecName "kube-api-access-cs7h6". PluginName "kubernetes.io/projected", VolumeGidValue "" May 15 12:41:09.391803 kubelet[2830]: I0515 12:41:09.391329 2830 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e0fb5f0-ddfc-4022-865c-cb2de4ca62e8-node-certs" (OuterVolumeSpecName: "node-certs") pod "3e0fb5f0-ddfc-4022-865c-cb2de4ca62e8" (UID: "3e0fb5f0-ddfc-4022-865c-cb2de4ca62e8"). InnerVolumeSpecName "node-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" May 15 12:41:09.394133 kubelet[2830]: I0515 12:41:09.394110 2830 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3e0fb5f0-ddfc-4022-865c-cb2de4ca62e8-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "3e0fb5f0-ddfc-4022-865c-cb2de4ca62e8" (UID: "3e0fb5f0-ddfc-4022-865c-cb2de4ca62e8"). InnerVolumeSpecName "tigera-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" May 15 12:41:09.411529 systemd-networkd[1466]: caliee8ac216aca: Link DOWN May 15 12:41:09.411543 systemd-networkd[1466]: caliee8ac216aca: Lost carrier May 15 12:41:09.482440 kubelet[2830]: I0515 12:41:09.482326 2830 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-cs7h6\" (UniqueName: \"kubernetes.io/projected/3e0fb5f0-ddfc-4022-865c-cb2de4ca62e8-kube-api-access-cs7h6\") on node \"172-236-125-189\" DevicePath \"\"" May 15 12:41:09.482440 kubelet[2830]: I0515 12:41:09.482359 2830 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3e0fb5f0-ddfc-4022-865c-cb2de4ca62e8-lib-modules\") on node \"172-236-125-189\" DevicePath \"\"" May 15 12:41:09.482440 kubelet[2830]: I0515 12:41:09.482368 2830 reconciler_common.go:289] "Volume detached for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/3e0fb5f0-ddfc-4022-865c-cb2de4ca62e8-cni-net-dir\") on node \"172-236-125-189\" DevicePath \"\"" May 15 12:41:09.482440 kubelet[2830]: I0515 12:41:09.482375 2830 reconciler_common.go:289] "Volume detached for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/3e0fb5f0-ddfc-4022-865c-cb2de4ca62e8-policysync\") on node \"172-236-125-189\" DevicePath \"\"" May 15 12:41:09.482440 kubelet[2830]: I0515 12:41:09.482391 2830 reconciler_common.go:289] "Volume detached for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/3e0fb5f0-ddfc-4022-865c-cb2de4ca62e8-cni-log-dir\") on node \"172-236-125-189\" DevicePath \"\"" May 15 12:41:09.482440 kubelet[2830]: I0515 12:41:09.482398 2830 reconciler_common.go:289] "Volume detached for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/3e0fb5f0-ddfc-4022-865c-cb2de4ca62e8-cni-bin-dir\") on node \"172-236-125-189\" DevicePath \"\"" May 15 12:41:09.482440 kubelet[2830]: I0515 12:41:09.482406 2830 reconciler_common.go:289] "Volume detached for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/3e0fb5f0-ddfc-4022-865c-cb2de4ca62e8-var-lib-calico\") on node \"172-236-125-189\" DevicePath \"\"" May 15 12:41:09.482440 kubelet[2830]: I0515 12:41:09.482413 2830 reconciler_common.go:289] "Volume detached for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/3e0fb5f0-ddfc-4022-865c-cb2de4ca62e8-var-run-calico\") on node \"172-236-125-189\" DevicePath \"\"" May 15 12:41:09.482730 kubelet[2830]: I0515 12:41:09.482420 2830 reconciler_common.go:289] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3e0fb5f0-ddfc-4022-865c-cb2de4ca62e8-tigera-ca-bundle\") on node \"172-236-125-189\" DevicePath \"\"" May 15 12:41:09.482730 kubelet[2830]: I0515 12:41:09.482427 2830 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3e0fb5f0-ddfc-4022-865c-cb2de4ca62e8-xtables-lock\") on node \"172-236-125-189\" DevicePath \"\"" May 15 12:41:09.482730 kubelet[2830]: I0515 12:41:09.482434 2830 reconciler_common.go:289] "Volume detached for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/3e0fb5f0-ddfc-4022-865c-cb2de4ca62e8-flexvol-driver-host\") on node \"172-236-125-189\" DevicePath \"\"" May 15 12:41:09.482730 kubelet[2830]: I0515 12:41:09.482442 2830 reconciler_common.go:289] "Volume detached for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/3e0fb5f0-ddfc-4022-865c-cb2de4ca62e8-node-certs\") on node \"172-236-125-189\" DevicePath \"\"" May 15 12:41:09.490023 containerd[1570]: 
2025-05-15 12:41:09.409 [INFO][5459] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="5379b2ac23b2a1ea32c6cc4705e1c105234e0ab7b66afc66f1415997fe8905b9" May 15 12:41:09.490023 containerd[1570]: 2025-05-15 12:41:09.409 [INFO][5459] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="5379b2ac23b2a1ea32c6cc4705e1c105234e0ab7b66afc66f1415997fe8905b9" iface="eth0" netns="/var/run/netns/cni-3e606e06-18a6-5198-7a34-9b02226fab1e" May 15 12:41:09.490023 containerd[1570]: 2025-05-15 12:41:09.410 [INFO][5459] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="5379b2ac23b2a1ea32c6cc4705e1c105234e0ab7b66afc66f1415997fe8905b9" iface="eth0" netns="/var/run/netns/cni-3e606e06-18a6-5198-7a34-9b02226fab1e" May 15 12:41:09.490023 containerd[1570]: 2025-05-15 12:41:09.417 [INFO][5459] cni-plugin/dataplane_linux.go 604: Deleted device in netns. ContainerID="5379b2ac23b2a1ea32c6cc4705e1c105234e0ab7b66afc66f1415997fe8905b9" after=7.221998ms iface="eth0" netns="/var/run/netns/cni-3e606e06-18a6-5198-7a34-9b02226fab1e" May 15 12:41:09.490023 containerd[1570]: 2025-05-15 12:41:09.417 [INFO][5459] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="5379b2ac23b2a1ea32c6cc4705e1c105234e0ab7b66afc66f1415997fe8905b9" May 15 12:41:09.490023 containerd[1570]: 2025-05-15 12:41:09.417 [INFO][5459] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5379b2ac23b2a1ea32c6cc4705e1c105234e0ab7b66afc66f1415997fe8905b9" May 15 12:41:09.490023 containerd[1570]: 2025-05-15 12:41:09.455 [INFO][5470] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5379b2ac23b2a1ea32c6cc4705e1c105234e0ab7b66afc66f1415997fe8905b9" HandleID="k8s-pod-network.5379b2ac23b2a1ea32c6cc4705e1c105234e0ab7b66afc66f1415997fe8905b9" Workload="172--236--125--189-k8s-calico--kube--controllers--b4bb544b7--zbnfw-eth0" May 15 12:41:09.490023 containerd[1570]: 2025-05-15 12:41:09.455 [INFO][5470] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 12:41:09.490023 containerd[1570]: 2025-05-15 12:41:09.455 [INFO][5470] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 15 12:41:09.490023 containerd[1570]: 2025-05-15 12:41:09.481 [INFO][5470] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="5379b2ac23b2a1ea32c6cc4705e1c105234e0ab7b66afc66f1415997fe8905b9" HandleID="k8s-pod-network.5379b2ac23b2a1ea32c6cc4705e1c105234e0ab7b66afc66f1415997fe8905b9" Workload="172--236--125--189-k8s-calico--kube--controllers--b4bb544b7--zbnfw-eth0" May 15 12:41:09.490023 containerd[1570]: 2025-05-15 12:41:09.481 [INFO][5470] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5379b2ac23b2a1ea32c6cc4705e1c105234e0ab7b66afc66f1415997fe8905b9" HandleID="k8s-pod-network.5379b2ac23b2a1ea32c6cc4705e1c105234e0ab7b66afc66f1415997fe8905b9" Workload="172--236--125--189-k8s-calico--kube--controllers--b4bb544b7--zbnfw-eth0" May 15 12:41:09.490023 containerd[1570]: 2025-05-15 12:41:09.483 [INFO][5470] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 15 12:41:09.490023 containerd[1570]: 2025-05-15 12:41:09.486 [INFO][5459] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="5379b2ac23b2a1ea32c6cc4705e1c105234e0ab7b66afc66f1415997fe8905b9" May 15 12:41:09.490694 containerd[1570]: time="2025-05-15T12:41:09.490121051Z" level=info msg="TearDown network for sandbox \"5379b2ac23b2a1ea32c6cc4705e1c105234e0ab7b66afc66f1415997fe8905b9\" successfully" May 15 12:41:09.490694 containerd[1570]: time="2025-05-15T12:41:09.490256026Z" level=info msg="StopPodSandbox for \"5379b2ac23b2a1ea32c6cc4705e1c105234e0ab7b66afc66f1415997fe8905b9\" returns successfully" May 15 12:41:09.583084 kubelet[2830]: E0515 12:41:09.582923 2830 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" May 15 12:41:09.584988 containerd[1570]: time="2025-05-15T12:41:09.584777444Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-f4w5w,Uid:c61ff59e-722a-49e6-9f11-51b6b8ef3cf9,Namespace:calico-system,Attempt:0,}" May 15 12:41:09.601003 containerd[1570]: time="2025-05-15T12:41:09.600794200Z" level=info msg="connecting to shim fe3b7e2a75cbbef5a1c2b805510ee41723a1954318840803e7d9b6669f77ae8b" address="unix:///run/containerd/s/5eceda3b09bea853886d49d2a6de198f765604f66b7baf5efa7c6b12111073da" namespace=k8s.io protocol=ttrpc version=3 May 15 12:41:09.624149 systemd[1]: Started cri-containerd-fe3b7e2a75cbbef5a1c2b805510ee41723a1954318840803e7d9b6669f77ae8b.scope - libcontainer container fe3b7e2a75cbbef5a1c2b805510ee41723a1954318840803e7d9b6669f77ae8b. May 15 12:41:09.630156 systemd[1]: Removed slice kubepods-besteffort-pod3e0fb5f0_ddfc_4022_865c_cb2de4ca62e8.slice - libcontainer container kubepods-besteffort-pod3e0fb5f0_ddfc_4022_865c_cb2de4ca62e8.slice. May 15 12:41:09.630401 systemd[1]: kubepods-besteffort-pod3e0fb5f0_ddfc_4022_865c_cb2de4ca62e8.slice: Consumed 3.622s CPU time, 298.1M memory peak, 161.1M written to disk. 
May 15 12:41:09.660960 containerd[1570]: time="2025-05-15T12:41:09.660917142Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-f4w5w,Uid:c61ff59e-722a-49e6-9f11-51b6b8ef3cf9,Namespace:calico-system,Attempt:0,} returns sandbox id \"fe3b7e2a75cbbef5a1c2b805510ee41723a1954318840803e7d9b6669f77ae8b\"" May 15 12:41:09.661697 kubelet[2830]: E0515 12:41:09.661677 2830 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" May 15 12:41:09.663807 containerd[1570]: time="2025-05-15T12:41:09.663778916Z" level=info msg="CreateContainer within sandbox \"fe3b7e2a75cbbef5a1c2b805510ee41723a1954318840803e7d9b6669f77ae8b\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" May 15 12:41:09.671001 containerd[1570]: time="2025-05-15T12:41:09.670961414Z" level=info msg="Container 87eb9b29dce78dc843e0300995cfeada920057ead3f89e4d2931e6b4b3a6475b: CDI devices from CRI Config.CDIDevices: []" May 15 12:41:09.676715 containerd[1570]: time="2025-05-15T12:41:09.676692891Z" level=info msg="CreateContainer within sandbox \"fe3b7e2a75cbbef5a1c2b805510ee41723a1954318840803e7d9b6669f77ae8b\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"87eb9b29dce78dc843e0300995cfeada920057ead3f89e4d2931e6b4b3a6475b\"" May 15 12:41:09.678023 containerd[1570]: time="2025-05-15T12:41:09.678004729Z" level=info msg="StartContainer for \"87eb9b29dce78dc843e0300995cfeada920057ead3f89e4d2931e6b4b3a6475b\"" May 15 12:41:09.680329 containerd[1570]: time="2025-05-15T12:41:09.680282615Z" level=info msg="connecting to shim 87eb9b29dce78dc843e0300995cfeada920057ead3f89e4d2931e6b4b3a6475b" address="unix:///run/containerd/s/5eceda3b09bea853886d49d2a6de198f765604f66b7baf5efa7c6b12111073da" protocol=ttrpc version=3 May 15 12:41:09.683996 kubelet[2830]: I0515 12:41:09.683128 2830 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b4d5a2a6-1051-40f7-84e4-ce0a66d4b74d-tigera-ca-bundle\") pod \"b4d5a2a6-1051-40f7-84e4-ce0a66d4b74d\" (UID: \"b4d5a2a6-1051-40f7-84e4-ce0a66d4b74d\") " May 15 12:41:09.683996 kubelet[2830]: I0515 12:41:09.683166 2830 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j5ll4\" (UniqueName: \"kubernetes.io/projected/b4d5a2a6-1051-40f7-84e4-ce0a66d4b74d-kube-api-access-j5ll4\") pod \"b4d5a2a6-1051-40f7-84e4-ce0a66d4b74d\" (UID: \"b4d5a2a6-1051-40f7-84e4-ce0a66d4b74d\") " May 15 12:41:09.691405 kubelet[2830]: I0515 12:41:09.691376 2830 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4d5a2a6-1051-40f7-84e4-ce0a66d4b74d-kube-api-access-j5ll4" (OuterVolumeSpecName: "kube-api-access-j5ll4") pod "b4d5a2a6-1051-40f7-84e4-ce0a66d4b74d" (UID: "b4d5a2a6-1051-40f7-84e4-ce0a66d4b74d"). InnerVolumeSpecName "kube-api-access-j5ll4". PluginName "kubernetes.io/projected", VolumeGidValue "" May 15 12:41:09.692601 kubelet[2830]: I0515 12:41:09.692580 2830 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b4d5a2a6-1051-40f7-84e4-ce0a66d4b74d-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "b4d5a2a6-1051-40f7-84e4-ce0a66d4b74d" (UID: "b4d5a2a6-1051-40f7-84e4-ce0a66d4b74d"). InnerVolumeSpecName "tigera-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" May 15 12:41:09.717116 systemd[1]: Started cri-containerd-87eb9b29dce78dc843e0300995cfeada920057ead3f89e4d2931e6b4b3a6475b.scope - libcontainer container 87eb9b29dce78dc843e0300995cfeada920057ead3f89e4d2931e6b4b3a6475b. May 15 12:41:09.749275 systemd[1]: cri-containerd-ce2af80fcf814145a80b6f6ed5574fd6c62fc43152d3c9fcb14ee42e66d6cd1a.scope: Deactivated successfully. May 15 12:41:09.753729 containerd[1570]: time="2025-05-15T12:41:09.753502954Z" level=info msg="received exit event container_id:\"ce2af80fcf814145a80b6f6ed5574fd6c62fc43152d3c9fcb14ee42e66d6cd1a\" id:\"ce2af80fcf814145a80b6f6ed5574fd6c62fc43152d3c9fcb14ee42e66d6cd1a\" pid:3424 exit_status:1 exited_at:{seconds:1747312869 nanos:752802587}" May 15 12:41:09.754214 containerd[1570]: time="2025-05-15T12:41:09.753666672Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ce2af80fcf814145a80b6f6ed5574fd6c62fc43152d3c9fcb14ee42e66d6cd1a\" id:\"ce2af80fcf814145a80b6f6ed5574fd6c62fc43152d3c9fcb14ee42e66d6cd1a\" pid:3424 exit_status:1 exited_at:{seconds:1747312869 nanos:752802587}" May 15 12:41:09.762266 containerd[1570]: time="2025-05-15T12:41:09.762195279Z" level=info msg="StartContainer for \"87eb9b29dce78dc843e0300995cfeada920057ead3f89e4d2931e6b4b3a6475b\" returns successfully" May 15 12:41:09.784103 kubelet[2830]: I0515 12:41:09.784054 2830 reconciler_common.go:289] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b4d5a2a6-1051-40f7-84e4-ce0a66d4b74d-tigera-ca-bundle\") on node \"172-236-125-189\" DevicePath \"\"" May 15 12:41:09.784517 kubelet[2830]: I0515 12:41:09.784454 2830 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-j5ll4\" (UniqueName: \"kubernetes.io/projected/b4d5a2a6-1051-40f7-84e4-ce0a66d4b74d-kube-api-access-j5ll4\") on node \"172-236-125-189\" DevicePath \"\"" May 15 12:41:09.807653 systemd[1]: cri-containerd-87eb9b29dce78dc843e0300995cfeada920057ead3f89e4d2931e6b4b3a6475b.scope: Deactivated successfully. May 15 12:41:09.808551 systemd[1]: cri-containerd-87eb9b29dce78dc843e0300995cfeada920057ead3f89e4d2931e6b4b3a6475b.scope: Consumed 43ms CPU time, 8M memory peak, 6.3M written to disk. 
May 15 12:41:09.810781 containerd[1570]: time="2025-05-15T12:41:09.810745437Z" level=info msg="received exit event container_id:\"87eb9b29dce78dc843e0300995cfeada920057ead3f89e4d2931e6b4b3a6475b\" id:\"87eb9b29dce78dc843e0300995cfeada920057ead3f89e4d2931e6b4b3a6475b\" pid:5539 exited_at:{seconds:1747312869 nanos:809947185}" May 15 12:41:09.812154 containerd[1570]: time="2025-05-15T12:41:09.811121729Z" level=info msg="TaskExit event in podsandbox handler container_id:\"87eb9b29dce78dc843e0300995cfeada920057ead3f89e4d2931e6b4b3a6475b\" id:\"87eb9b29dce78dc843e0300995cfeada920057ead3f89e4d2931e6b4b3a6475b\" pid:5539 exited_at:{seconds:1747312869 nanos:809947185}" May 15 12:41:09.812725 containerd[1570]: time="2025-05-15T12:41:09.812673375Z" level=info msg="StopContainer for \"ce2af80fcf814145a80b6f6ed5574fd6c62fc43152d3c9fcb14ee42e66d6cd1a\" returns successfully" May 15 12:41:09.813936 containerd[1570]: time="2025-05-15T12:41:09.813895106Z" level=info msg="StopPodSandbox for \"f992e7c9d9f1cfb2dca13bf963e18c22e2e42d96c5fca9fad70dbe453ded623a\"" May 15 12:41:09.814082 containerd[1570]: time="2025-05-15T12:41:09.814065862Z" level=info msg="Container to stop \"ce2af80fcf814145a80b6f6ed5574fd6c62fc43152d3c9fcb14ee42e66d6cd1a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 15 12:41:09.823132 systemd[1]: cri-containerd-f992e7c9d9f1cfb2dca13bf963e18c22e2e42d96c5fca9fad70dbe453ded623a.scope: Deactivated successfully. May 15 12:41:09.825620 containerd[1570]: time="2025-05-15T12:41:09.825545980Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f992e7c9d9f1cfb2dca13bf963e18c22e2e42d96c5fca9fad70dbe453ded623a\" id:\"f992e7c9d9f1cfb2dca13bf963e18c22e2e42d96c5fca9fad70dbe453ded623a\" pid:3337 exit_status:137 exited_at:{seconds:1747312869 nanos:825360098}" May 15 12:41:09.873078 containerd[1570]: time="2025-05-15T12:41:09.872652524Z" level=info msg="shim disconnected" id=f992e7c9d9f1cfb2dca13bf963e18c22e2e42d96c5fca9fad70dbe453ded623a namespace=k8s.io May 15 12:41:09.873078 containerd[1570]: time="2025-05-15T12:41:09.872681727Z" level=warning msg="cleaning up after shim disconnected" id=f992e7c9d9f1cfb2dca13bf963e18c22e2e42d96c5fca9fad70dbe453ded623a namespace=k8s.io May 15 12:41:09.873078 containerd[1570]: time="2025-05-15T12:41:09.872689485Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 15 12:41:09.887372 containerd[1570]: time="2025-05-15T12:41:09.887322301Z" level=info msg="received exit event sandbox_id:\"f992e7c9d9f1cfb2dca13bf963e18c22e2e42d96c5fca9fad70dbe453ded623a\" exit_status:137 exited_at:{seconds:1747312869 nanos:825360098}" May 15 12:41:09.888289 containerd[1570]: time="2025-05-15T12:41:09.888091621Z" level=info msg="TearDown network for sandbox \"f992e7c9d9f1cfb2dca13bf963e18c22e2e42d96c5fca9fad70dbe453ded623a\" successfully" May 15 12:41:09.888289 containerd[1570]: time="2025-05-15T12:41:09.888115345Z" level=info msg="StopPodSandbox for \"f992e7c9d9f1cfb2dca13bf963e18c22e2e42d96c5fca9fad70dbe453ded623a\" returns successfully" May 15 12:41:09.896661 kubelet[2830]: I0515 12:41:09.896605 2830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3e0fb5f0-ddfc-4022-865c-cb2de4ca62e8" path="/var/lib/kubelet/pods/3e0fb5f0-ddfc-4022-865c-cb2de4ca62e8/volumes" May 15 12:41:09.904355 systemd[1]: Removed slice kubepods-besteffort-podb4d5a2a6_1051_40f7_84e4_ce0a66d4b74d.slice - libcontainer container kubepods-besteffort-podb4d5a2a6_1051_40f7_84e4_ce0a66d4b74d.slice. 
May 15 12:41:09.929014 kubelet[2830]: I0515 12:41:09.928728 2830 topology_manager.go:215] "Topology Admit Handler" podUID="a945b43a-3538-4da2-a2be-980ba50aab2d" podNamespace="calico-system" podName="calico-typha-6f8f4784b9-zjdjm" May 15 12:41:09.929529 kubelet[2830]: E0515 12:41:09.929510 2830 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b4d5a2a6-1051-40f7-84e4-ce0a66d4b74d" containerName="calico-kube-controllers" May 15 12:41:09.929529 kubelet[2830]: E0515 12:41:09.929529 2830 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ef32b572-5c1d-422f-80de-3b16fb8fb7b4" containerName="calico-typha" May 15 12:41:09.929623 kubelet[2830]: I0515 12:41:09.929558 2830 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef32b572-5c1d-422f-80de-3b16fb8fb7b4" containerName="calico-typha" May 15 12:41:09.929623 kubelet[2830]: I0515 12:41:09.929564 2830 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4d5a2a6-1051-40f7-84e4-ce0a66d4b74d" containerName="calico-kube-controllers" May 15 12:41:09.936921 systemd[1]: Created slice kubepods-besteffort-poda945b43a_3538_4da2_a2be_980ba50aab2d.slice - libcontainer container kubepods-besteffort-poda945b43a_3538_4da2_a2be_980ba50aab2d.slice. May 15 12:41:10.083216 systemd[1]: var-lib-kubelet-pods-b4d5a2a6\x2d1051\x2d40f7\x2d84e4\x2dce0a66d4b74d-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dkube\x2dcontrollers-1.mount: Deactivated successfully. May 15 12:41:10.083372 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5379b2ac23b2a1ea32c6cc4705e1c105234e0ab7b66afc66f1415997fe8905b9-shm.mount: Deactivated successfully. May 15 12:41:10.083450 systemd[1]: run-netns-cni\x2d3e606e06\x2d18a6\x2d5198\x2d7a34\x2d9b02226fab1e.mount: Deactivated successfully. May 15 12:41:10.083526 systemd[1]: var-lib-kubelet-pods-3e0fb5f0\x2dddfc\x2d4022\x2d865c\x2dcb2de4ca62e8-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dnode-1.mount: Deactivated successfully. May 15 12:41:10.083614 systemd[1]: var-lib-kubelet-pods-b4d5a2a6\x2d1051\x2d40f7\x2d84e4\x2dce0a66d4b74d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dj5ll4.mount: Deactivated successfully. May 15 12:41:10.083688 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ce2af80fcf814145a80b6f6ed5574fd6c62fc43152d3c9fcb14ee42e66d6cd1a-rootfs.mount: Deactivated successfully. May 15 12:41:10.083758 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f992e7c9d9f1cfb2dca13bf963e18c22e2e42d96c5fca9fad70dbe453ded623a-rootfs.mount: Deactivated successfully. May 15 12:41:10.083822 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f992e7c9d9f1cfb2dca13bf963e18c22e2e42d96c5fca9fad70dbe453ded623a-shm.mount: Deactivated successfully. May 15 12:41:10.083896 systemd[1]: var-lib-kubelet-pods-3e0fb5f0\x2dddfc\x2d4022\x2d865c\x2dcb2de4ca62e8-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dcs7h6.mount: Deactivated successfully. May 15 12:41:10.084000 systemd[1]: var-lib-kubelet-pods-3e0fb5f0\x2dddfc\x2d4022\x2d865c\x2dcb2de4ca62e8-volumes-kubernetes.io\x7esecret-node\x2dcerts.mount: Deactivated successfully. 
May 15 12:41:10.089984 kubelet[2830]: I0515 12:41:10.086225 2830 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wnfz6\" (UniqueName: \"kubernetes.io/projected/ef32b572-5c1d-422f-80de-3b16fb8fb7b4-kube-api-access-wnfz6\") pod \"ef32b572-5c1d-422f-80de-3b16fb8fb7b4\" (UID: \"ef32b572-5c1d-422f-80de-3b16fb8fb7b4\") " May 15 12:41:10.089984 kubelet[2830]: I0515 12:41:10.086279 2830 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ef32b572-5c1d-422f-80de-3b16fb8fb7b4-tigera-ca-bundle\") pod \"ef32b572-5c1d-422f-80de-3b16fb8fb7b4\" (UID: \"ef32b572-5c1d-422f-80de-3b16fb8fb7b4\") " May 15 12:41:10.089984 kubelet[2830]: I0515 12:41:10.086306 2830 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/ef32b572-5c1d-422f-80de-3b16fb8fb7b4-typha-certs\") pod \"ef32b572-5c1d-422f-80de-3b16fb8fb7b4\" (UID: \"ef32b572-5c1d-422f-80de-3b16fb8fb7b4\") " May 15 12:41:10.089984 kubelet[2830]: I0515 12:41:10.086377 2830 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a945b43a-3538-4da2-a2be-980ba50aab2d-tigera-ca-bundle\") pod \"calico-typha-6f8f4784b9-zjdjm\" (UID: \"a945b43a-3538-4da2-a2be-980ba50aab2d\") " pod="calico-system/calico-typha-6f8f4784b9-zjdjm" May 15 12:41:10.089984 kubelet[2830]: I0515 12:41:10.086397 2830 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/a945b43a-3538-4da2-a2be-980ba50aab2d-typha-certs\") pod \"calico-typha-6f8f4784b9-zjdjm\" (UID: \"a945b43a-3538-4da2-a2be-980ba50aab2d\") " pod="calico-system/calico-typha-6f8f4784b9-zjdjm" May 15 12:41:10.090188 kubelet[2830]: I0515 12:41:10.086420 2830 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6jcs\" (UniqueName: \"kubernetes.io/projected/a945b43a-3538-4da2-a2be-980ba50aab2d-kube-api-access-x6jcs\") pod \"calico-typha-6f8f4784b9-zjdjm\" (UID: \"a945b43a-3538-4da2-a2be-980ba50aab2d\") " pod="calico-system/calico-typha-6f8f4784b9-zjdjm" May 15 12:41:10.099161 systemd[1]: var-lib-kubelet-pods-ef32b572\x2d5c1d\x2d422f\x2d80de\x2d3b16fb8fb7b4-volumes-kubernetes.io\x7esecret-typha\x2dcerts.mount: Deactivated successfully. May 15 12:41:10.100559 kubelet[2830]: I0515 12:41:10.100147 2830 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef32b572-5c1d-422f-80de-3b16fb8fb7b4-typha-certs" (OuterVolumeSpecName: "typha-certs") pod "ef32b572-5c1d-422f-80de-3b16fb8fb7b4" (UID: "ef32b572-5c1d-422f-80de-3b16fb8fb7b4"). InnerVolumeSpecName "typha-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" May 15 12:41:10.101658 kubelet[2830]: I0515 12:41:10.101629 2830 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef32b572-5c1d-422f-80de-3b16fb8fb7b4-kube-api-access-wnfz6" (OuterVolumeSpecName: "kube-api-access-wnfz6") pod "ef32b572-5c1d-422f-80de-3b16fb8fb7b4" (UID: "ef32b572-5c1d-422f-80de-3b16fb8fb7b4"). InnerVolumeSpecName "kube-api-access-wnfz6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" May 15 12:41:10.104906 systemd[1]: var-lib-kubelet-pods-ef32b572\x2d5c1d\x2d422f\x2d80de\x2d3b16fb8fb7b4-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dwnfz6.mount: Deactivated successfully. May 15 12:41:10.106789 kubelet[2830]: I0515 12:41:10.106492 2830 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ef32b572-5c1d-422f-80de-3b16fb8fb7b4-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "ef32b572-5c1d-422f-80de-3b16fb8fb7b4" (UID: "ef32b572-5c1d-422f-80de-3b16fb8fb7b4"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" May 15 12:41:10.108284 systemd[1]: var-lib-kubelet-pods-ef32b572\x2d5c1d\x2d422f\x2d80de\x2d3b16fb8fb7b4-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dtypha-1.mount: Deactivated successfully. May 15 12:41:10.187996 kubelet[2830]: I0515 12:41:10.187913 2830 reconciler_common.go:289] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ef32b572-5c1d-422f-80de-3b16fb8fb7b4-tigera-ca-bundle\") on node \"172-236-125-189\" DevicePath \"\"" May 15 12:41:10.188139 kubelet[2830]: I0515 12:41:10.188086 2830 reconciler_common.go:289] "Volume detached for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/ef32b572-5c1d-422f-80de-3b16fb8fb7b4-typha-certs\") on node \"172-236-125-189\" DevicePath \"\"" May 15 12:41:10.188139 kubelet[2830]: I0515 12:41:10.188099 2830 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-wnfz6\" (UniqueName: \"kubernetes.io/projected/ef32b572-5c1d-422f-80de-3b16fb8fb7b4-kube-api-access-wnfz6\") on node \"172-236-125-189\" DevicePath \"\"" May 15 12:41:10.240658 kubelet[2830]: E0515 12:41:10.240616 2830 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" May 15 12:41:10.241712 containerd[1570]: time="2025-05-15T12:41:10.241625201Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6f8f4784b9-zjdjm,Uid:a945b43a-3538-4da2-a2be-980ba50aab2d,Namespace:calico-system,Attempt:0,}" May 15 12:41:10.257560 containerd[1570]: time="2025-05-15T12:41:10.257463277Z" level=info msg="connecting to shim 9034d88ce754850e93d0772527b068551646dc9df59c3ac900eaf8eb12683267" address="unix:///run/containerd/s/fb8f4ab1ac846f6bd197cb5c2bc03cf34f90e2e1d9ef302636632db6500c9d7f" namespace=k8s.io protocol=ttrpc version=3 May 15 12:41:10.284122 systemd[1]: Started cri-containerd-9034d88ce754850e93d0772527b068551646dc9df59c3ac900eaf8eb12683267.scope - libcontainer container 9034d88ce754850e93d0772527b068551646dc9df59c3ac900eaf8eb12683267. May 15 12:41:10.326248 kubelet[2830]: I0515 12:41:10.326186 2830 scope.go:117] "RemoveContainer" containerID="ce2af80fcf814145a80b6f6ed5574fd6c62fc43152d3c9fcb14ee42e66d6cd1a" May 15 12:41:10.335241 kubelet[2830]: E0515 12:41:10.335217 2830 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" May 15 12:41:10.339809 systemd[1]: Removed slice kubepods-besteffort-podef32b572_5c1d_422f_80de_3b16fb8fb7b4.slice - libcontainer container kubepods-besteffort-podef32b572_5c1d_422f_80de_3b16fb8fb7b4.slice. 
May 15 12:41:10.346437 containerd[1570]: time="2025-05-15T12:41:10.346349333Z" level=info msg="CreateContainer within sandbox \"fe3b7e2a75cbbef5a1c2b805510ee41723a1954318840803e7d9b6669f77ae8b\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" May 15 12:41:10.348414 containerd[1570]: time="2025-05-15T12:41:10.348213386Z" level=info msg="RemoveContainer for \"ce2af80fcf814145a80b6f6ed5574fd6c62fc43152d3c9fcb14ee42e66d6cd1a\"" May 15 12:41:10.348568 containerd[1570]: time="2025-05-15T12:41:10.346820495Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6f8f4784b9-zjdjm,Uid:a945b43a-3538-4da2-a2be-980ba50aab2d,Namespace:calico-system,Attempt:0,} returns sandbox id \"9034d88ce754850e93d0772527b068551646dc9df59c3ac900eaf8eb12683267\"" May 15 12:41:10.351961 kubelet[2830]: E0515 12:41:10.351928 2830 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" May 15 12:41:10.366402 containerd[1570]: time="2025-05-15T12:41:10.366359202Z" level=info msg="RemoveContainer for \"ce2af80fcf814145a80b6f6ed5574fd6c62fc43152d3c9fcb14ee42e66d6cd1a\" returns successfully" May 15 12:41:10.367073 containerd[1570]: time="2025-05-15T12:41:10.367010569Z" level=info msg="Container 24db0fdb283628ebe6ec0551d9ec1d71b2c76af596ad9d4df7fe115d5615a5ba: CDI devices from CRI Config.CDIDevices: []" May 15 12:41:10.370320 kubelet[2830]: I0515 12:41:10.370243 2830 scope.go:117] "RemoveContainer" containerID="ce2af80fcf814145a80b6f6ed5574fd6c62fc43152d3c9fcb14ee42e66d6cd1a" May 15 12:41:10.371153 containerd[1570]: time="2025-05-15T12:41:10.371032800Z" level=error msg="ContainerStatus for \"ce2af80fcf814145a80b6f6ed5574fd6c62fc43152d3c9fcb14ee42e66d6cd1a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ce2af80fcf814145a80b6f6ed5574fd6c62fc43152d3c9fcb14ee42e66d6cd1a\": not found" May 15 12:41:10.372155 kubelet[2830]: E0515 12:41:10.372080 2830 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ce2af80fcf814145a80b6f6ed5574fd6c62fc43152d3c9fcb14ee42e66d6cd1a\": not found" containerID="ce2af80fcf814145a80b6f6ed5574fd6c62fc43152d3c9fcb14ee42e66d6cd1a" May 15 12:41:10.372155 kubelet[2830]: I0515 12:41:10.372109 2830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ce2af80fcf814145a80b6f6ed5574fd6c62fc43152d3c9fcb14ee42e66d6cd1a"} err="failed to get container status \"ce2af80fcf814145a80b6f6ed5574fd6c62fc43152d3c9fcb14ee42e66d6cd1a\": rpc error: code = NotFound desc = an error occurred when try to find container \"ce2af80fcf814145a80b6f6ed5574fd6c62fc43152d3c9fcb14ee42e66d6cd1a\": not found" May 15 12:41:10.372155 kubelet[2830]: I0515 12:41:10.372129 2830 scope.go:117] "RemoveContainer" containerID="bb47f88d23d868d5ebbe006bc9e7f942271143c5fb7c47820ae9bdcafc1fb884" May 15 12:41:10.375194 containerd[1570]: time="2025-05-15T12:41:10.375114975Z" level=info msg="RemoveContainer for \"bb47f88d23d868d5ebbe006bc9e7f942271143c5fb7c47820ae9bdcafc1fb884\"" May 15 12:41:10.377704 containerd[1570]: time="2025-05-15T12:41:10.377668095Z" level=info msg="CreateContainer within sandbox \"9034d88ce754850e93d0772527b068551646dc9df59c3ac900eaf8eb12683267\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" May 15 12:41:10.378669 containerd[1570]: 
time="2025-05-15T12:41:10.378635242Z" level=info msg="CreateContainer within sandbox \"fe3b7e2a75cbbef5a1c2b805510ee41723a1954318840803e7d9b6669f77ae8b\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"24db0fdb283628ebe6ec0551d9ec1d71b2c76af596ad9d4df7fe115d5615a5ba\"" May 15 12:41:10.379833 containerd[1570]: time="2025-05-15T12:41:10.379795431Z" level=info msg="StartContainer for \"24db0fdb283628ebe6ec0551d9ec1d71b2c76af596ad9d4df7fe115d5615a5ba\"" May 15 12:41:10.380987 containerd[1570]: time="2025-05-15T12:41:10.380894675Z" level=info msg="connecting to shim 24db0fdb283628ebe6ec0551d9ec1d71b2c76af596ad9d4df7fe115d5615a5ba" address="unix:///run/containerd/s/5eceda3b09bea853886d49d2a6de198f765604f66b7baf5efa7c6b12111073da" protocol=ttrpc version=3 May 15 12:41:10.385623 containerd[1570]: time="2025-05-15T12:41:10.385566353Z" level=info msg="RemoveContainer for \"bb47f88d23d868d5ebbe006bc9e7f942271143c5fb7c47820ae9bdcafc1fb884\" returns successfully" May 15 12:41:10.387155 containerd[1570]: time="2025-05-15T12:41:10.387136879Z" level=info msg="Container de9d1fb984eb111a8f41a884aa3e9e7010cd52393e061da1c75ec1b91d875e56: CDI devices from CRI Config.CDIDevices: []" May 15 12:41:10.394904 containerd[1570]: time="2025-05-15T12:41:10.394874737Z" level=info msg="CreateContainer within sandbox \"9034d88ce754850e93d0772527b068551646dc9df59c3ac900eaf8eb12683267\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"de9d1fb984eb111a8f41a884aa3e9e7010cd52393e061da1c75ec1b91d875e56\"" May 15 12:41:10.396223 containerd[1570]: time="2025-05-15T12:41:10.396143879Z" level=info msg="StartContainer for \"de9d1fb984eb111a8f41a884aa3e9e7010cd52393e061da1c75ec1b91d875e56\"" May 15 12:41:10.398486 containerd[1570]: time="2025-05-15T12:41:10.398193344Z" level=info msg="connecting to shim de9d1fb984eb111a8f41a884aa3e9e7010cd52393e061da1c75ec1b91d875e56" address="unix:///run/containerd/s/fb8f4ab1ac846f6bd197cb5c2bc03cf34f90e2e1d9ef302636632db6500c9d7f" protocol=ttrpc version=3 May 15 12:41:10.433554 systemd[1]: Started cri-containerd-de9d1fb984eb111a8f41a884aa3e9e7010cd52393e061da1c75ec1b91d875e56.scope - libcontainer container de9d1fb984eb111a8f41a884aa3e9e7010cd52393e061da1c75ec1b91d875e56. May 15 12:41:10.447617 systemd[1]: Started cri-containerd-24db0fdb283628ebe6ec0551d9ec1d71b2c76af596ad9d4df7fe115d5615a5ba.scope - libcontainer container 24db0fdb283628ebe6ec0551d9ec1d71b2c76af596ad9d4df7fe115d5615a5ba. 
May 15 12:41:10.543771 containerd[1570]: time="2025-05-15T12:41:10.543605757Z" level=info msg="StartContainer for \"24db0fdb283628ebe6ec0551d9ec1d71b2c76af596ad9d4df7fe115d5615a5ba\" returns successfully" May 15 12:41:10.562304 containerd[1570]: time="2025-05-15T12:41:10.562257166Z" level=info msg="StartContainer for \"de9d1fb984eb111a8f41a884aa3e9e7010cd52393e061da1c75ec1b91d875e56\" returns successfully" May 15 12:41:10.885998 kubelet[2830]: E0515 12:41:10.885536 2830 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" May 15 12:41:11.367784 containerd[1570]: time="2025-05-15T12:41:11.367642131Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: failed to load CNI config list file /etc/cni/net.d/10-calico.conflist: error parsing configuration list: unexpected end of JSON input: invalid cni config: failed to load cni config" May 15 12:41:11.371564 systemd[1]: cri-containerd-24db0fdb283628ebe6ec0551d9ec1d71b2c76af596ad9d4df7fe115d5615a5ba.scope: Deactivated successfully. May 15 12:41:11.371864 systemd[1]: cri-containerd-24db0fdb283628ebe6ec0551d9ec1d71b2c76af596ad9d4df7fe115d5615a5ba.scope: Consumed 741ms CPU time, 59.2M memory peak, 31.6M read from disk. May 15 12:41:11.373000 kubelet[2830]: E0515 12:41:11.372768 2830 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" May 15 12:41:11.376062 containerd[1570]: time="2025-05-15T12:41:11.376035253Z" level=info msg="received exit event container_id:\"24db0fdb283628ebe6ec0551d9ec1d71b2c76af596ad9d4df7fe115d5615a5ba\" id:\"24db0fdb283628ebe6ec0551d9ec1d71b2c76af596ad9d4df7fe115d5615a5ba\" pid:5712 exited_at:{seconds:1747312871 nanos:374223701}" May 15 12:41:11.376484 containerd[1570]: time="2025-05-15T12:41:11.376424149Z" level=info msg="TaskExit event in podsandbox handler container_id:\"24db0fdb283628ebe6ec0551d9ec1d71b2c76af596ad9d4df7fe115d5615a5ba\" id:\"24db0fdb283628ebe6ec0551d9ec1d71b2c76af596ad9d4df7fe115d5615a5ba\" pid:5712 exited_at:{seconds:1747312871 nanos:374223701}" May 15 12:41:11.377853 kubelet[2830]: E0515 12:41:11.377832 2830 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" May 15 12:41:11.409554 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-24db0fdb283628ebe6ec0551d9ec1d71b2c76af596ad9d4df7fe115d5615a5ba-rootfs.mount: Deactivated successfully. 
May 15 12:41:11.416504 kubelet[2830]: I0515 12:41:11.416423 2830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6f8f4784b9-zjdjm" podStartSLOduration=3.416399433 podStartE2EDuration="3.416399433s" podCreationTimestamp="2025-05-15 12:41:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 12:41:11.396113753 +0000 UTC m=+85.633352627" watchObservedRunningTime="2025-05-15 12:41:11.416399433 +0000 UTC m=+85.653638307" May 15 12:41:11.889348 kubelet[2830]: I0515 12:41:11.889284 2830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4d5a2a6-1051-40f7-84e4-ce0a66d4b74d" path="/var/lib/kubelet/pods/b4d5a2a6-1051-40f7-84e4-ce0a66d4b74d/volumes" May 15 12:41:11.890021 kubelet[2830]: I0515 12:41:11.889941 2830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ef32b572-5c1d-422f-80de-3b16fb8fb7b4" path="/var/lib/kubelet/pods/ef32b572-5c1d-422f-80de-3b16fb8fb7b4/volumes" May 15 12:41:12.386876 kubelet[2830]: E0515 12:41:12.386799 2830 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" May 15 12:41:12.389657 kubelet[2830]: E0515 12:41:12.388732 2830 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" May 15 12:41:12.409209 containerd[1570]: time="2025-05-15T12:41:12.409161001Z" level=info msg="CreateContainer within sandbox \"fe3b7e2a75cbbef5a1c2b805510ee41723a1954318840803e7d9b6669f77ae8b\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" May 15 12:41:12.423341 containerd[1570]: time="2025-05-15T12:41:12.422360692Z" level=info msg="Container 520fe2a0171e01ebb811af5ea3ed371a58a77d9d3df9fe7d96b7c75071e57be6: CDI devices from CRI Config.CDIDevices: []" May 15 12:41:12.430876 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2412249392.mount: Deactivated successfully. May 15 12:41:12.436782 containerd[1570]: time="2025-05-15T12:41:12.436705748Z" level=info msg="CreateContainer within sandbox \"fe3b7e2a75cbbef5a1c2b805510ee41723a1954318840803e7d9b6669f77ae8b\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"520fe2a0171e01ebb811af5ea3ed371a58a77d9d3df9fe7d96b7c75071e57be6\"" May 15 12:41:12.437648 containerd[1570]: time="2025-05-15T12:41:12.437617776Z" level=info msg="StartContainer for \"520fe2a0171e01ebb811af5ea3ed371a58a77d9d3df9fe7d96b7c75071e57be6\"" May 15 12:41:12.439192 containerd[1570]: time="2025-05-15T12:41:12.439158607Z" level=info msg="connecting to shim 520fe2a0171e01ebb811af5ea3ed371a58a77d9d3df9fe7d96b7c75071e57be6" address="unix:///run/containerd/s/5eceda3b09bea853886d49d2a6de198f765604f66b7baf5efa7c6b12111073da" protocol=ttrpc version=3 May 15 12:41:12.459121 systemd[1]: Started cri-containerd-520fe2a0171e01ebb811af5ea3ed371a58a77d9d3df9fe7d96b7c75071e57be6.scope - libcontainer container 520fe2a0171e01ebb811af5ea3ed371a58a77d9d3df9fe7d96b7c75071e57be6. 
May 15 12:41:12.523000 containerd[1570]: time="2025-05-15T12:41:12.522922994Z" level=info msg="StartContainer for \"520fe2a0171e01ebb811af5ea3ed371a58a77d9d3df9fe7d96b7c75071e57be6\" returns successfully" May 15 12:41:12.610784 kubelet[2830]: I0515 12:41:12.610337 2830 topology_manager.go:215] "Topology Admit Handler" podUID="70cdad40-9dbf-46a6-97cd-bc1fc9f65d90" podNamespace="calico-system" podName="calico-kube-controllers-8685866b8f-5cgsg" May 15 12:41:12.619934 systemd[1]: Created slice kubepods-besteffort-pod70cdad40_9dbf_46a6_97cd_bc1fc9f65d90.slice - libcontainer container kubepods-besteffort-pod70cdad40_9dbf_46a6_97cd_bc1fc9f65d90.slice. May 15 12:41:12.704849 kubelet[2830]: I0515 12:41:12.704620 2830 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/70cdad40-9dbf-46a6-97cd-bc1fc9f65d90-tigera-ca-bundle\") pod \"calico-kube-controllers-8685866b8f-5cgsg\" (UID: \"70cdad40-9dbf-46a6-97cd-bc1fc9f65d90\") " pod="calico-system/calico-kube-controllers-8685866b8f-5cgsg" May 15 12:41:12.704849 kubelet[2830]: I0515 12:41:12.704672 2830 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wf4dk\" (UniqueName: \"kubernetes.io/projected/70cdad40-9dbf-46a6-97cd-bc1fc9f65d90-kube-api-access-wf4dk\") pod \"calico-kube-controllers-8685866b8f-5cgsg\" (UID: \"70cdad40-9dbf-46a6-97cd-bc1fc9f65d90\") " pod="calico-system/calico-kube-controllers-8685866b8f-5cgsg" May 15 12:41:12.925138 containerd[1570]: time="2025-05-15T12:41:12.925011307Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-8685866b8f-5cgsg,Uid:70cdad40-9dbf-46a6-97cd-bc1fc9f65d90,Namespace:calico-system,Attempt:0,}" May 15 12:41:13.023707 systemd-networkd[1466]: calif0e1d90d350: Link UP May 15 12:41:13.024934 systemd-networkd[1466]: calif0e1d90d350: Gained carrier May 15 12:41:13.040730 containerd[1570]: 2025-05-15 12:41:12.958 [INFO][5818] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--236--125--189-k8s-calico--kube--controllers--8685866b8f--5cgsg-eth0 calico-kube-controllers-8685866b8f- calico-system 70cdad40-9dbf-46a6-97cd-bc1fc9f65d90 1242 0 2025-05-15 12:41:10 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:8685866b8f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s 172-236-125-189 calico-kube-controllers-8685866b8f-5cgsg eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calif0e1d90d350 [] []}} ContainerID="0052f2110551841bea1c8d03c1bd0a66326ca2b199bf98c0724af5d3c0bc0b31" Namespace="calico-system" Pod="calico-kube-controllers-8685866b8f-5cgsg" WorkloadEndpoint="172--236--125--189-k8s-calico--kube--controllers--8685866b8f--5cgsg-" May 15 12:41:13.040730 containerd[1570]: 2025-05-15 12:41:12.958 [INFO][5818] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="0052f2110551841bea1c8d03c1bd0a66326ca2b199bf98c0724af5d3c0bc0b31" Namespace="calico-system" Pod="calico-kube-controllers-8685866b8f-5cgsg" WorkloadEndpoint="172--236--125--189-k8s-calico--kube--controllers--8685866b8f--5cgsg-eth0" May 15 12:41:13.040730 containerd[1570]: 2025-05-15 12:41:12.982 [INFO][5831] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="0052f2110551841bea1c8d03c1bd0a66326ca2b199bf98c0724af5d3c0bc0b31" HandleID="k8s-pod-network.0052f2110551841bea1c8d03c1bd0a66326ca2b199bf98c0724af5d3c0bc0b31" Workload="172--236--125--189-k8s-calico--kube--controllers--8685866b8f--5cgsg-eth0" May 15 12:41:13.040730 containerd[1570]: 2025-05-15 12:41:12.992 [INFO][5831] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0052f2110551841bea1c8d03c1bd0a66326ca2b199bf98c0724af5d3c0bc0b31" HandleID="k8s-pod-network.0052f2110551841bea1c8d03c1bd0a66326ca2b199bf98c0724af5d3c0bc0b31" Workload="172--236--125--189-k8s-calico--kube--controllers--8685866b8f--5cgsg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000120320), Attrs:map[string]string{"namespace":"calico-system", "node":"172-236-125-189", "pod":"calico-kube-controllers-8685866b8f-5cgsg", "timestamp":"2025-05-15 12:41:12.982741606 +0000 UTC"}, Hostname:"172-236-125-189", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 15 12:41:13.040730 containerd[1570]: 2025-05-15 12:41:12.992 [INFO][5831] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 12:41:13.040730 containerd[1570]: 2025-05-15 12:41:12.992 [INFO][5831] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 15 12:41:13.040730 containerd[1570]: 2025-05-15 12:41:12.992 [INFO][5831] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-236-125-189' May 15 12:41:13.040730 containerd[1570]: 2025-05-15 12:41:12.994 [INFO][5831] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.0052f2110551841bea1c8d03c1bd0a66326ca2b199bf98c0724af5d3c0bc0b31" host="172-236-125-189" May 15 12:41:13.040730 containerd[1570]: 2025-05-15 12:41:13.000 [INFO][5831] ipam/ipam.go 372: Looking up existing affinities for host host="172-236-125-189" May 15 12:41:13.040730 containerd[1570]: 2025-05-15 12:41:13.003 [INFO][5831] ipam/ipam.go 489: Trying affinity for 192.168.83.128/26 host="172-236-125-189" May 15 12:41:13.040730 containerd[1570]: 2025-05-15 12:41:13.005 [INFO][5831] ipam/ipam.go 155: Attempting to load block cidr=192.168.83.128/26 host="172-236-125-189" May 15 12:41:13.040730 containerd[1570]: 2025-05-15 12:41:13.007 [INFO][5831] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.83.128/26 host="172-236-125-189" May 15 12:41:13.040730 containerd[1570]: 2025-05-15 12:41:13.007 [INFO][5831] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.83.128/26 handle="k8s-pod-network.0052f2110551841bea1c8d03c1bd0a66326ca2b199bf98c0724af5d3c0bc0b31" host="172-236-125-189" May 15 12:41:13.040730 containerd[1570]: 2025-05-15 12:41:13.008 [INFO][5831] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.0052f2110551841bea1c8d03c1bd0a66326ca2b199bf98c0724af5d3c0bc0b31 May 15 12:41:13.040730 containerd[1570]: 2025-05-15 12:41:13.012 [INFO][5831] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.83.128/26 handle="k8s-pod-network.0052f2110551841bea1c8d03c1bd0a66326ca2b199bf98c0724af5d3c0bc0b31" host="172-236-125-189" May 15 12:41:13.040730 containerd[1570]: 2025-05-15 12:41:13.018 [INFO][5831] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.83.137/26] block=192.168.83.128/26 handle="k8s-pod-network.0052f2110551841bea1c8d03c1bd0a66326ca2b199bf98c0724af5d3c0bc0b31" host="172-236-125-189" May 15 12:41:13.040730 containerd[1570]: 
2025-05-15 12:41:13.018 [INFO][5831] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.83.137/26] handle="k8s-pod-network.0052f2110551841bea1c8d03c1bd0a66326ca2b199bf98c0724af5d3c0bc0b31" host="172-236-125-189" May 15 12:41:13.040730 containerd[1570]: 2025-05-15 12:41:13.018 [INFO][5831] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 15 12:41:13.040730 containerd[1570]: 2025-05-15 12:41:13.018 [INFO][5831] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.83.137/26] IPv6=[] ContainerID="0052f2110551841bea1c8d03c1bd0a66326ca2b199bf98c0724af5d3c0bc0b31" HandleID="k8s-pod-network.0052f2110551841bea1c8d03c1bd0a66326ca2b199bf98c0724af5d3c0bc0b31" Workload="172--236--125--189-k8s-calico--kube--controllers--8685866b8f--5cgsg-eth0" May 15 12:41:13.043025 containerd[1570]: 2025-05-15 12:41:13.020 [INFO][5818] cni-plugin/k8s.go 386: Populated endpoint ContainerID="0052f2110551841bea1c8d03c1bd0a66326ca2b199bf98c0724af5d3c0bc0b31" Namespace="calico-system" Pod="calico-kube-controllers-8685866b8f-5cgsg" WorkloadEndpoint="172--236--125--189-k8s-calico--kube--controllers--8685866b8f--5cgsg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--236--125--189-k8s-calico--kube--controllers--8685866b8f--5cgsg-eth0", GenerateName:"calico-kube-controllers-8685866b8f-", Namespace:"calico-system", SelfLink:"", UID:"70cdad40-9dbf-46a6-97cd-bc1fc9f65d90", ResourceVersion:"1242", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 12, 41, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"8685866b8f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-236-125-189", ContainerID:"", Pod:"calico-kube-controllers-8685866b8f-5cgsg", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.83.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif0e1d90d350", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 12:41:13.043025 containerd[1570]: 2025-05-15 12:41:13.021 [INFO][5818] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.83.137/32] ContainerID="0052f2110551841bea1c8d03c1bd0a66326ca2b199bf98c0724af5d3c0bc0b31" Namespace="calico-system" Pod="calico-kube-controllers-8685866b8f-5cgsg" WorkloadEndpoint="172--236--125--189-k8s-calico--kube--controllers--8685866b8f--5cgsg-eth0" May 15 12:41:13.043025 containerd[1570]: 2025-05-15 12:41:13.021 [INFO][5818] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif0e1d90d350 ContainerID="0052f2110551841bea1c8d03c1bd0a66326ca2b199bf98c0724af5d3c0bc0b31" Namespace="calico-system" Pod="calico-kube-controllers-8685866b8f-5cgsg" WorkloadEndpoint="172--236--125--189-k8s-calico--kube--controllers--8685866b8f--5cgsg-eth0" May 15 12:41:13.043025 containerd[1570]: 2025-05-15 12:41:13.025 [INFO][5818] cni-plugin/dataplane_linux.go 508: Disabling 
IPv4 forwarding ContainerID="0052f2110551841bea1c8d03c1bd0a66326ca2b199bf98c0724af5d3c0bc0b31" Namespace="calico-system" Pod="calico-kube-controllers-8685866b8f-5cgsg" WorkloadEndpoint="172--236--125--189-k8s-calico--kube--controllers--8685866b8f--5cgsg-eth0" May 15 12:41:13.043025 containerd[1570]: 2025-05-15 12:41:13.027 [INFO][5818] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="0052f2110551841bea1c8d03c1bd0a66326ca2b199bf98c0724af5d3c0bc0b31" Namespace="calico-system" Pod="calico-kube-controllers-8685866b8f-5cgsg" WorkloadEndpoint="172--236--125--189-k8s-calico--kube--controllers--8685866b8f--5cgsg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--236--125--189-k8s-calico--kube--controllers--8685866b8f--5cgsg-eth0", GenerateName:"calico-kube-controllers-8685866b8f-", Namespace:"calico-system", SelfLink:"", UID:"70cdad40-9dbf-46a6-97cd-bc1fc9f65d90", ResourceVersion:"1242", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 12, 41, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"8685866b8f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-236-125-189", ContainerID:"0052f2110551841bea1c8d03c1bd0a66326ca2b199bf98c0724af5d3c0bc0b31", Pod:"calico-kube-controllers-8685866b8f-5cgsg", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.83.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif0e1d90d350", MAC:"2e:b9:bd:15:bb:48", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 12:41:13.043025 containerd[1570]: 2025-05-15 12:41:13.035 [INFO][5818] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="0052f2110551841bea1c8d03c1bd0a66326ca2b199bf98c0724af5d3c0bc0b31" Namespace="calico-system" Pod="calico-kube-controllers-8685866b8f-5cgsg" WorkloadEndpoint="172--236--125--189-k8s-calico--kube--controllers--8685866b8f--5cgsg-eth0" May 15 12:41:13.065613 containerd[1570]: time="2025-05-15T12:41:13.065475999Z" level=info msg="connecting to shim 0052f2110551841bea1c8d03c1bd0a66326ca2b199bf98c0724af5d3c0bc0b31" address="unix:///run/containerd/s/11464ba0d4cf7b6a6714000dec817ac02cced6ab3394588c1fea662002d78066" namespace=k8s.io protocol=ttrpc version=3 May 15 12:41:13.091155 systemd[1]: Started cri-containerd-0052f2110551841bea1c8d03c1bd0a66326ca2b199bf98c0724af5d3c0bc0b31.scope - libcontainer container 0052f2110551841bea1c8d03c1bd0a66326ca2b199bf98c0724af5d3c0bc0b31. 
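The IPAM exchange recorded above is self-checking: the host 172-236-125-189 already holds an affinity for block 192.168.83.128/26, and the pod is assigned 192.168.83.137/26 out of that block. Below is a minimal Go sketch — not Calico code, written here only to mirror the arithmetic in the log — confirming that the claimed address falls inside the affine /26:

package main

import (
	"fmt"
	"net/netip"
)

func main() {
	// Block with host affinity, per the ipam.go lines above.
	block := netip.MustParsePrefix("192.168.83.128/26")
	// Address claimed for calico-kube-controllers-8685866b8f-5cgsg.
	addr := netip.MustParseAddr("192.168.83.137")

	// A /26 spans 64 addresses: 192.168.83.128 through 192.168.83.191.
	fmt.Printf("block %s contains %s: %v\n", block, addr, block.Contains(addr))
}

That containment check is what lets the plugin serve the request from the already-affine block ("Affinity is confirmed and block has been loaded") instead of claiming a new one.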
May 15 12:41:13.143608 containerd[1570]: time="2025-05-15T12:41:13.143559123Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-8685866b8f-5cgsg,Uid:70cdad40-9dbf-46a6-97cd-bc1fc9f65d90,Namespace:calico-system,Attempt:0,} returns sandbox id \"0052f2110551841bea1c8d03c1bd0a66326ca2b199bf98c0724af5d3c0bc0b31\"" May 15 12:41:13.153778 containerd[1570]: time="2025-05-15T12:41:13.153393823Z" level=info msg="CreateContainer within sandbox \"0052f2110551841bea1c8d03c1bd0a66326ca2b199bf98c0724af5d3c0bc0b31\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" May 15 12:41:13.159466 containerd[1570]: time="2025-05-15T12:41:13.159436351Z" level=info msg="Container 83748827b7bdc0e3166c0530323687956b6fcae4b59154a076226251be51399c: CDI devices from CRI Config.CDIDevices: []" May 15 12:41:13.163834 containerd[1570]: time="2025-05-15T12:41:13.163795286Z" level=info msg="CreateContainer within sandbox \"0052f2110551841bea1c8d03c1bd0a66326ca2b199bf98c0724af5d3c0bc0b31\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"83748827b7bdc0e3166c0530323687956b6fcae4b59154a076226251be51399c\"" May 15 12:41:13.164440 containerd[1570]: time="2025-05-15T12:41:13.164419807Z" level=info msg="StartContainer for \"83748827b7bdc0e3166c0530323687956b6fcae4b59154a076226251be51399c\"" May 15 12:41:13.170001 containerd[1570]: time="2025-05-15T12:41:13.169322290Z" level=info msg="connecting to shim 83748827b7bdc0e3166c0530323687956b6fcae4b59154a076226251be51399c" address="unix:///run/containerd/s/11464ba0d4cf7b6a6714000dec817ac02cced6ab3394588c1fea662002d78066" protocol=ttrpc version=3 May 15 12:41:13.192289 systemd[1]: Started cri-containerd-83748827b7bdc0e3166c0530323687956b6fcae4b59154a076226251be51399c.scope - libcontainer container 83748827b7bdc0e3166c0530323687956b6fcae4b59154a076226251be51399c. 
May 15 12:41:13.251381 containerd[1570]: time="2025-05-15T12:41:13.251338844Z" level=info msg="StartContainer for \"83748827b7bdc0e3166c0530323687956b6fcae4b59154a076226251be51399c\" returns successfully" May 15 12:41:13.413485 kubelet[2830]: E0515 12:41:13.412503 2830 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" May 15 12:41:13.413485 kubelet[2830]: E0515 12:41:13.412640 2830 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" May 15 12:41:13.446204 kubelet[2830]: I0515 12:41:13.445192 2830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-f4w5w" podStartSLOduration=4.445173146 podStartE2EDuration="4.445173146s" podCreationTimestamp="2025-05-15 12:41:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 12:41:13.444297652 +0000 UTC m=+87.681536526" watchObservedRunningTime="2025-05-15 12:41:13.445173146 +0000 UTC m=+87.682412020" May 15 12:41:13.446204 kubelet[2830]: I0515 12:41:13.445546 2830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-8685866b8f-5cgsg" podStartSLOduration=3.445539664 podStartE2EDuration="3.445539664s" podCreationTimestamp="2025-05-15 12:41:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 12:41:13.415916371 +0000 UTC m=+87.653155245" watchObservedRunningTime="2025-05-15 12:41:13.445539664 +0000 UTC m=+87.682778538" May 15 12:41:13.469592 containerd[1570]: time="2025-05-15T12:41:13.469548774Z" level=info msg="TaskExit event in podsandbox handler container_id:\"83748827b7bdc0e3166c0530323687956b6fcae4b59154a076226251be51399c\" id:\"f628de3cfdcb9d30d133f6f7920f34fe47d72e29c647c84c317b4d3411232880\" pid:5940 exit_status:1 exited_at:{seconds:1747312873 nanos:469078869}" May 15 12:41:14.491541 containerd[1570]: time="2025-05-15T12:41:14.491483960Z" level=info msg="TaskExit event in podsandbox handler container_id:\"83748827b7bdc0e3166c0530323687956b6fcae4b59154a076226251be51399c\" id:\"6ad4d354228db5450a097a66292f04137968fedb533ab9a2cc7d4a9be924e91f\" pid:6116 exit_status:1 exited_at:{seconds:1747312874 nanos:491074798}" May 15 12:41:14.806313 systemd-networkd[1466]: calif0e1d90d350: Gained IPv6LL May 15 12:41:14.885716 kubelet[2830]: E0515 12:41:14.885611 2830 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" May 15 12:41:19.885934 kubelet[2830]: E0515 12:41:19.885876 2830 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" May 15 12:41:39.586028 kubelet[2830]: E0515 12:41:39.584955 2830 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" May 15 12:41:39.692871 containerd[1570]: time="2025-05-15T12:41:39.692810335Z" level=info msg="TaskExit event in podsandbox handler 
container_id:\"520fe2a0171e01ebb811af5ea3ed371a58a77d9d3df9fe7d96b7c75071e57be6\" id:\"f6d959d48f08088b6fa3e91672c3e3a72ddfbae83696d1e8266060f207988e3b\" pid:6198 exited_at:{seconds:1747312899 nanos:692159030}" May 15 12:41:39.795489 containerd[1570]: time="2025-05-15T12:41:39.795414478Z" level=info msg="TaskExit event in podsandbox handler container_id:\"520fe2a0171e01ebb811af5ea3ed371a58a77d9d3df9fe7d96b7c75071e57be6\" id:\"af0db6c6c54eec972322020ea92ec9e38038caa3c96022f9154856d4a3b32e4d\" pid:6224 exited_at:{seconds:1747312899 nanos:794868897}" May 15 12:41:40.482509 kubelet[2830]: E0515 12:41:40.482441 2830 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" May 15 12:41:42.984886 containerd[1570]: time="2025-05-15T12:41:42.984789915Z" level=info msg="TaskExit event in podsandbox handler container_id:\"83748827b7bdc0e3166c0530323687956b6fcae4b59154a076226251be51399c\" id:\"737dc54329a0ec2833752810a274596d6212d884010d0c2ad65069a13be2ffd7\" pid:6248 exited_at:{seconds:1747312902 nanos:983912107}" May 15 12:41:45.905814 kubelet[2830]: I0515 12:41:45.905236 2830 scope.go:117] "RemoveContainer" containerID="fed7eecbb666220a7f9e47c38d8e9e2159e339e9412e4f341cd255e0edad9e58" May 15 12:41:45.910184 containerd[1570]: time="2025-05-15T12:41:45.910124843Z" level=info msg="RemoveContainer for \"fed7eecbb666220a7f9e47c38d8e9e2159e339e9412e4f341cd255e0edad9e58\"" May 15 12:41:45.917561 containerd[1570]: time="2025-05-15T12:41:45.917475777Z" level=info msg="RemoveContainer for \"fed7eecbb666220a7f9e47c38d8e9e2159e339e9412e4f341cd255e0edad9e58\" returns successfully" May 15 12:41:45.919701 containerd[1570]: time="2025-05-15T12:41:45.919590872Z" level=info msg="StopPodSandbox for \"5379b2ac23b2a1ea32c6cc4705e1c105234e0ab7b66afc66f1415997fe8905b9\"" May 15 12:41:46.088912 containerd[1570]: 2025-05-15 12:41:46.031 [WARNING][6272] cni-plugin/k8s.go 566: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="5379b2ac23b2a1ea32c6cc4705e1c105234e0ab7b66afc66f1415997fe8905b9" WorkloadEndpoint="172--236--125--189-k8s-calico--kube--controllers--b4bb544b7--zbnfw-eth0" May 15 12:41:46.088912 containerd[1570]: 2025-05-15 12:41:46.032 [INFO][6272] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="5379b2ac23b2a1ea32c6cc4705e1c105234e0ab7b66afc66f1415997fe8905b9" May 15 12:41:46.088912 containerd[1570]: 2025-05-15 12:41:46.032 [INFO][6272] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="5379b2ac23b2a1ea32c6cc4705e1c105234e0ab7b66afc66f1415997fe8905b9" iface="eth0" netns="" May 15 12:41:46.088912 containerd[1570]: 2025-05-15 12:41:46.032 [INFO][6272] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="5379b2ac23b2a1ea32c6cc4705e1c105234e0ab7b66afc66f1415997fe8905b9" May 15 12:41:46.088912 containerd[1570]: 2025-05-15 12:41:46.032 [INFO][6272] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5379b2ac23b2a1ea32c6cc4705e1c105234e0ab7b66afc66f1415997fe8905b9" May 15 12:41:46.088912 containerd[1570]: 2025-05-15 12:41:46.072 [INFO][6279] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5379b2ac23b2a1ea32c6cc4705e1c105234e0ab7b66afc66f1415997fe8905b9" HandleID="k8s-pod-network.5379b2ac23b2a1ea32c6cc4705e1c105234e0ab7b66afc66f1415997fe8905b9" Workload="172--236--125--189-k8s-calico--kube--controllers--b4bb544b7--zbnfw-eth0" May 15 12:41:46.088912 containerd[1570]: 2025-05-15 12:41:46.073 [INFO][6279] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 12:41:46.088912 containerd[1570]: 2025-05-15 12:41:46.073 [INFO][6279] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 15 12:41:46.088912 containerd[1570]: 2025-05-15 12:41:46.082 [WARNING][6279] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="5379b2ac23b2a1ea32c6cc4705e1c105234e0ab7b66afc66f1415997fe8905b9" HandleID="k8s-pod-network.5379b2ac23b2a1ea32c6cc4705e1c105234e0ab7b66afc66f1415997fe8905b9" Workload="172--236--125--189-k8s-calico--kube--controllers--b4bb544b7--zbnfw-eth0" May 15 12:41:46.088912 containerd[1570]: 2025-05-15 12:41:46.082 [INFO][6279] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5379b2ac23b2a1ea32c6cc4705e1c105234e0ab7b66afc66f1415997fe8905b9" HandleID="k8s-pod-network.5379b2ac23b2a1ea32c6cc4705e1c105234e0ab7b66afc66f1415997fe8905b9" Workload="172--236--125--189-k8s-calico--kube--controllers--b4bb544b7--zbnfw-eth0" May 15 12:41:46.088912 containerd[1570]: 2025-05-15 12:41:46.084 [INFO][6279] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 15 12:41:46.088912 containerd[1570]: 2025-05-15 12:41:46.086 [INFO][6272] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="5379b2ac23b2a1ea32c6cc4705e1c105234e0ab7b66afc66f1415997fe8905b9" May 15 12:41:46.088912 containerd[1570]: time="2025-05-15T12:41:46.088865069Z" level=info msg="TearDown network for sandbox \"5379b2ac23b2a1ea32c6cc4705e1c105234e0ab7b66afc66f1415997fe8905b9\" successfully" May 15 12:41:46.088912 containerd[1570]: time="2025-05-15T12:41:46.088884438Z" level=info msg="StopPodSandbox for \"5379b2ac23b2a1ea32c6cc4705e1c105234e0ab7b66afc66f1415997fe8905b9\" returns successfully" May 15 12:41:46.089701 containerd[1570]: time="2025-05-15T12:41:46.089669651Z" level=info msg="RemovePodSandbox for \"5379b2ac23b2a1ea32c6cc4705e1c105234e0ab7b66afc66f1415997fe8905b9\"" May 15 12:41:46.089750 containerd[1570]: time="2025-05-15T12:41:46.089708699Z" level=info msg="Forcibly stopping sandbox \"5379b2ac23b2a1ea32c6cc4705e1c105234e0ab7b66afc66f1415997fe8905b9\"" May 15 12:41:46.170930 containerd[1570]: 2025-05-15 12:41:46.130 [WARNING][6298] cni-plugin/k8s.go 566: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="5379b2ac23b2a1ea32c6cc4705e1c105234e0ab7b66afc66f1415997fe8905b9" WorkloadEndpoint="172--236--125--189-k8s-calico--kube--controllers--b4bb544b7--zbnfw-eth0" May 15 12:41:46.170930 containerd[1570]: 2025-05-15 12:41:46.130 [INFO][6298] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="5379b2ac23b2a1ea32c6cc4705e1c105234e0ab7b66afc66f1415997fe8905b9" May 15 12:41:46.170930 containerd[1570]: 2025-05-15 12:41:46.130 [INFO][6298] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5379b2ac23b2a1ea32c6cc4705e1c105234e0ab7b66afc66f1415997fe8905b9" iface="eth0" netns="" May 15 12:41:46.170930 containerd[1570]: 2025-05-15 12:41:46.130 [INFO][6298] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="5379b2ac23b2a1ea32c6cc4705e1c105234e0ab7b66afc66f1415997fe8905b9" May 15 12:41:46.170930 containerd[1570]: 2025-05-15 12:41:46.130 [INFO][6298] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5379b2ac23b2a1ea32c6cc4705e1c105234e0ab7b66afc66f1415997fe8905b9" May 15 12:41:46.170930 containerd[1570]: 2025-05-15 12:41:46.159 [INFO][6306] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5379b2ac23b2a1ea32c6cc4705e1c105234e0ab7b66afc66f1415997fe8905b9" HandleID="k8s-pod-network.5379b2ac23b2a1ea32c6cc4705e1c105234e0ab7b66afc66f1415997fe8905b9" Workload="172--236--125--189-k8s-calico--kube--controllers--b4bb544b7--zbnfw-eth0" May 15 12:41:46.170930 containerd[1570]: 2025-05-15 12:41:46.159 [INFO][6306] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 12:41:46.170930 containerd[1570]: 2025-05-15 12:41:46.160 [INFO][6306] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 15 12:41:46.170930 containerd[1570]: 2025-05-15 12:41:46.165 [WARNING][6306] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5379b2ac23b2a1ea32c6cc4705e1c105234e0ab7b66afc66f1415997fe8905b9" HandleID="k8s-pod-network.5379b2ac23b2a1ea32c6cc4705e1c105234e0ab7b66afc66f1415997fe8905b9" Workload="172--236--125--189-k8s-calico--kube--controllers--b4bb544b7--zbnfw-eth0" May 15 12:41:46.170930 containerd[1570]: 2025-05-15 12:41:46.165 [INFO][6306] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5379b2ac23b2a1ea32c6cc4705e1c105234e0ab7b66afc66f1415997fe8905b9" HandleID="k8s-pod-network.5379b2ac23b2a1ea32c6cc4705e1c105234e0ab7b66afc66f1415997fe8905b9" Workload="172--236--125--189-k8s-calico--kube--controllers--b4bb544b7--zbnfw-eth0" May 15 12:41:46.170930 containerd[1570]: 2025-05-15 12:41:46.166 [INFO][6306] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 15 12:41:46.170930 containerd[1570]: 2025-05-15 12:41:46.168 [INFO][6298] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="5379b2ac23b2a1ea32c6cc4705e1c105234e0ab7b66afc66f1415997fe8905b9" May 15 12:41:46.173041 containerd[1570]: time="2025-05-15T12:41:46.172071470Z" level=info msg="TearDown network for sandbox \"5379b2ac23b2a1ea32c6cc4705e1c105234e0ab7b66afc66f1415997fe8905b9\" successfully" May 15 12:41:46.177818 containerd[1570]: time="2025-05-15T12:41:46.177778853Z" level=info msg="Ensure that sandbox 5379b2ac23b2a1ea32c6cc4705e1c105234e0ab7b66afc66f1415997fe8905b9 in task-service has been cleanup successfully" May 15 12:41:46.181276 containerd[1570]: time="2025-05-15T12:41:46.181235991Z" level=info msg="RemovePodSandbox \"5379b2ac23b2a1ea32c6cc4705e1c105234e0ab7b66afc66f1415997fe8905b9\" returns successfully" May 15 12:41:46.183467 containerd[1570]: time="2025-05-15T12:41:46.183434328Z" level=info msg="StopPodSandbox for \"f992e7c9d9f1cfb2dca13bf963e18c22e2e42d96c5fca9fad70dbe453ded623a\"" May 15 12:41:46.183908 containerd[1570]: time="2025-05-15T12:41:46.183853189Z" level=info msg="TearDown network for sandbox \"f992e7c9d9f1cfb2dca13bf963e18c22e2e42d96c5fca9fad70dbe453ded623a\" successfully" May 15 12:41:46.183908 containerd[1570]: time="2025-05-15T12:41:46.183871478Z" level=info msg="StopPodSandbox for \"f992e7c9d9f1cfb2dca13bf963e18c22e2e42d96c5fca9fad70dbe453ded623a\" returns successfully" May 15 12:41:46.184494 containerd[1570]: time="2025-05-15T12:41:46.184453671Z" level=info msg="RemovePodSandbox for \"f992e7c9d9f1cfb2dca13bf963e18c22e2e42d96c5fca9fad70dbe453ded623a\"" May 15 12:41:46.184691 containerd[1570]: time="2025-05-15T12:41:46.184478660Z" level=info msg="Forcibly stopping sandbox \"f992e7c9d9f1cfb2dca13bf963e18c22e2e42d96c5fca9fad70dbe453ded623a\"" May 15 12:41:46.184781 containerd[1570]: time="2025-05-15T12:41:46.184720878Z" level=info msg="TearDown network for sandbox \"f992e7c9d9f1cfb2dca13bf963e18c22e2e42d96c5fca9fad70dbe453ded623a\" successfully" May 15 12:41:46.187119 containerd[1570]: time="2025-05-15T12:41:46.187096767Z" level=info msg="Ensure that sandbox f992e7c9d9f1cfb2dca13bf963e18c22e2e42d96c5fca9fad70dbe453ded623a in task-service has been cleanup successfully" May 15 12:41:46.189038 containerd[1570]: time="2025-05-15T12:41:46.188932231Z" level=info msg="RemovePodSandbox \"f992e7c9d9f1cfb2dca13bf963e18c22e2e42d96c5fca9fad70dbe453ded623a\" returns successfully" May 15 12:41:46.189376 containerd[1570]: time="2025-05-15T12:41:46.189358202Z" level=info msg="StopPodSandbox for \"d2d4e7036a63651a965aa6accf5e121eeee0d7127fedf7cc79343c9cd5b1752d\"" May 15 12:41:46.283471 containerd[1570]: 2025-05-15 12:41:46.235 [WARNING][6325] cni-plugin/k8s.go 566: WorkloadEndpoint does 
not exist in the datastore, moving forward with the clean up ContainerID="d2d4e7036a63651a965aa6accf5e121eeee0d7127fedf7cc79343c9cd5b1752d" WorkloadEndpoint="172--236--125--189-k8s-calico--apiserver--78b5784dc8--mxm9v-eth0" May 15 12:41:46.283471 containerd[1570]: 2025-05-15 12:41:46.235 [INFO][6325] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d2d4e7036a63651a965aa6accf5e121eeee0d7127fedf7cc79343c9cd5b1752d" May 15 12:41:46.283471 containerd[1570]: 2025-05-15 12:41:46.235 [INFO][6325] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d2d4e7036a63651a965aa6accf5e121eeee0d7127fedf7cc79343c9cd5b1752d" iface="eth0" netns="" May 15 12:41:46.283471 containerd[1570]: 2025-05-15 12:41:46.235 [INFO][6325] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d2d4e7036a63651a965aa6accf5e121eeee0d7127fedf7cc79343c9cd5b1752d" May 15 12:41:46.283471 containerd[1570]: 2025-05-15 12:41:46.235 [INFO][6325] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d2d4e7036a63651a965aa6accf5e121eeee0d7127fedf7cc79343c9cd5b1752d" May 15 12:41:46.283471 containerd[1570]: 2025-05-15 12:41:46.269 [INFO][6332] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d2d4e7036a63651a965aa6accf5e121eeee0d7127fedf7cc79343c9cd5b1752d" HandleID="k8s-pod-network.d2d4e7036a63651a965aa6accf5e121eeee0d7127fedf7cc79343c9cd5b1752d" Workload="172--236--125--189-k8s-calico--apiserver--78b5784dc8--mxm9v-eth0" May 15 12:41:46.283471 containerd[1570]: 2025-05-15 12:41:46.269 [INFO][6332] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 12:41:46.283471 containerd[1570]: 2025-05-15 12:41:46.270 [INFO][6332] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 15 12:41:46.283471 containerd[1570]: 2025-05-15 12:41:46.276 [WARNING][6332] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d2d4e7036a63651a965aa6accf5e121eeee0d7127fedf7cc79343c9cd5b1752d" HandleID="k8s-pod-network.d2d4e7036a63651a965aa6accf5e121eeee0d7127fedf7cc79343c9cd5b1752d" Workload="172--236--125--189-k8s-calico--apiserver--78b5784dc8--mxm9v-eth0" May 15 12:41:46.283471 containerd[1570]: 2025-05-15 12:41:46.276 [INFO][6332] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d2d4e7036a63651a965aa6accf5e121eeee0d7127fedf7cc79343c9cd5b1752d" HandleID="k8s-pod-network.d2d4e7036a63651a965aa6accf5e121eeee0d7127fedf7cc79343c9cd5b1752d" Workload="172--236--125--189-k8s-calico--apiserver--78b5784dc8--mxm9v-eth0" May 15 12:41:46.283471 containerd[1570]: 2025-05-15 12:41:46.278 [INFO][6332] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 15 12:41:46.283471 containerd[1570]: 2025-05-15 12:41:46.281 [INFO][6325] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="d2d4e7036a63651a965aa6accf5e121eeee0d7127fedf7cc79343c9cd5b1752d" May 15 12:41:46.284641 containerd[1570]: time="2025-05-15T12:41:46.283549759Z" level=info msg="TearDown network for sandbox \"d2d4e7036a63651a965aa6accf5e121eeee0d7127fedf7cc79343c9cd5b1752d\" successfully" May 15 12:41:46.284641 containerd[1570]: time="2025-05-15T12:41:46.283580727Z" level=info msg="StopPodSandbox for \"d2d4e7036a63651a965aa6accf5e121eeee0d7127fedf7cc79343c9cd5b1752d\" returns successfully" May 15 12:41:46.284641 containerd[1570]: time="2025-05-15T12:41:46.284209868Z" level=info msg="RemovePodSandbox for \"d2d4e7036a63651a965aa6accf5e121eeee0d7127fedf7cc79343c9cd5b1752d\"" May 15 12:41:46.284641 containerd[1570]: time="2025-05-15T12:41:46.284238737Z" level=info msg="Forcibly stopping sandbox \"d2d4e7036a63651a965aa6accf5e121eeee0d7127fedf7cc79343c9cd5b1752d\"" May 15 12:41:46.383938 containerd[1570]: 2025-05-15 12:41:46.328 [WARNING][6351] cni-plugin/k8s.go 566: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="d2d4e7036a63651a965aa6accf5e121eeee0d7127fedf7cc79343c9cd5b1752d" WorkloadEndpoint="172--236--125--189-k8s-calico--apiserver--78b5784dc8--mxm9v-eth0" May 15 12:41:46.383938 containerd[1570]: 2025-05-15 12:41:46.328 [INFO][6351] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d2d4e7036a63651a965aa6accf5e121eeee0d7127fedf7cc79343c9cd5b1752d" May 15 12:41:46.383938 containerd[1570]: 2025-05-15 12:41:46.328 [INFO][6351] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d2d4e7036a63651a965aa6accf5e121eeee0d7127fedf7cc79343c9cd5b1752d" iface="eth0" netns="" May 15 12:41:46.383938 containerd[1570]: 2025-05-15 12:41:46.328 [INFO][6351] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d2d4e7036a63651a965aa6accf5e121eeee0d7127fedf7cc79343c9cd5b1752d" May 15 12:41:46.383938 containerd[1570]: 2025-05-15 12:41:46.328 [INFO][6351] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d2d4e7036a63651a965aa6accf5e121eeee0d7127fedf7cc79343c9cd5b1752d" May 15 12:41:46.383938 containerd[1570]: 2025-05-15 12:41:46.361 [INFO][6358] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d2d4e7036a63651a965aa6accf5e121eeee0d7127fedf7cc79343c9cd5b1752d" HandleID="k8s-pod-network.d2d4e7036a63651a965aa6accf5e121eeee0d7127fedf7cc79343c9cd5b1752d" Workload="172--236--125--189-k8s-calico--apiserver--78b5784dc8--mxm9v-eth0" May 15 12:41:46.383938 containerd[1570]: 2025-05-15 12:41:46.362 [INFO][6358] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 12:41:46.383938 containerd[1570]: 2025-05-15 12:41:46.362 [INFO][6358] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 15 12:41:46.383938 containerd[1570]: 2025-05-15 12:41:46.370 [WARNING][6358] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d2d4e7036a63651a965aa6accf5e121eeee0d7127fedf7cc79343c9cd5b1752d" HandleID="k8s-pod-network.d2d4e7036a63651a965aa6accf5e121eeee0d7127fedf7cc79343c9cd5b1752d" Workload="172--236--125--189-k8s-calico--apiserver--78b5784dc8--mxm9v-eth0" May 15 12:41:46.383938 containerd[1570]: 2025-05-15 12:41:46.370 [INFO][6358] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d2d4e7036a63651a965aa6accf5e121eeee0d7127fedf7cc79343c9cd5b1752d" HandleID="k8s-pod-network.d2d4e7036a63651a965aa6accf5e121eeee0d7127fedf7cc79343c9cd5b1752d" Workload="172--236--125--189-k8s-calico--apiserver--78b5784dc8--mxm9v-eth0" May 15 12:41:46.383938 containerd[1570]: 2025-05-15 12:41:46.372 [INFO][6358] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 15 12:41:46.383938 containerd[1570]: 2025-05-15 12:41:46.377 [INFO][6351] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="d2d4e7036a63651a965aa6accf5e121eeee0d7127fedf7cc79343c9cd5b1752d" May 15 12:41:46.383938 containerd[1570]: time="2025-05-15T12:41:46.381529559Z" level=info msg="TearDown network for sandbox \"d2d4e7036a63651a965aa6accf5e121eeee0d7127fedf7cc79343c9cd5b1752d\" successfully" May 15 12:41:46.386282 containerd[1570]: time="2025-05-15T12:41:46.386246559Z" level=info msg="Ensure that sandbox d2d4e7036a63651a965aa6accf5e121eeee0d7127fedf7cc79343c9cd5b1752d in task-service has been cleanup successfully" May 15 12:41:46.388514 containerd[1570]: time="2025-05-15T12:41:46.388492754Z" level=info msg="RemovePodSandbox \"d2d4e7036a63651a965aa6accf5e121eeee0d7127fedf7cc79343c9cd5b1752d\" returns successfully" May 15 12:41:46.393084 containerd[1570]: time="2025-05-15T12:41:46.393050811Z" level=info msg="StopPodSandbox for \"8abd2a4186fac809bfb50819f8f51048f513f098c45622e059d9de857ea226cf\"" May 15 12:41:46.393501 containerd[1570]: time="2025-05-15T12:41:46.393379776Z" level=info msg="TearDown network for sandbox \"8abd2a4186fac809bfb50819f8f51048f513f098c45622e059d9de857ea226cf\" successfully" May 15 12:41:46.393501 containerd[1570]: time="2025-05-15T12:41:46.393420734Z" level=info msg="StopPodSandbox for \"8abd2a4186fac809bfb50819f8f51048f513f098c45622e059d9de857ea226cf\" returns successfully" May 15 12:41:46.394253 containerd[1570]: time="2025-05-15T12:41:46.394232836Z" level=info msg="RemovePodSandbox for \"8abd2a4186fac809bfb50819f8f51048f513f098c45622e059d9de857ea226cf\"" May 15 12:41:46.394322 containerd[1570]: time="2025-05-15T12:41:46.394255415Z" level=info msg="Forcibly stopping sandbox \"8abd2a4186fac809bfb50819f8f51048f513f098c45622e059d9de857ea226cf\"" May 15 12:41:46.394363 containerd[1570]: time="2025-05-15T12:41:46.394345900Z" level=info msg="TearDown network for sandbox \"8abd2a4186fac809bfb50819f8f51048f513f098c45622e059d9de857ea226cf\" successfully" May 15 12:41:46.397701 containerd[1570]: time="2025-05-15T12:41:46.397675815Z" level=info msg="Ensure that sandbox 8abd2a4186fac809bfb50819f8f51048f513f098c45622e059d9de857ea226cf in task-service has been cleanup successfully" May 15 12:41:46.399692 containerd[1570]: time="2025-05-15T12:41:46.399664192Z" level=info msg="RemovePodSandbox \"8abd2a4186fac809bfb50819f8f51048f513f098c45622e059d9de857ea226cf\" returns successfully" May 15 12:41:46.400221 containerd[1570]: time="2025-05-15T12:41:46.400198087Z" level=info msg="StopPodSandbox for \"fa02e25e87b15e2f48a92a373c9456feb99eee2b757d3822e3fb67ac0ecb034a\"" May 15 12:41:46.516819 containerd[1570]: 2025-05-15 12:41:46.476 [WARNING][6376] cni-plugin/k8s.go 566: WorkloadEndpoint does not exist in 
the datastore, moving forward with the clean up ContainerID="fa02e25e87b15e2f48a92a373c9456feb99eee2b757d3822e3fb67ac0ecb034a" WorkloadEndpoint="172--236--125--189-k8s-calico--apiserver--78b5784dc8--8lbpp-eth0" May 15 12:41:46.516819 containerd[1570]: 2025-05-15 12:41:46.476 [INFO][6376] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="fa02e25e87b15e2f48a92a373c9456feb99eee2b757d3822e3fb67ac0ecb034a" May 15 12:41:46.516819 containerd[1570]: 2025-05-15 12:41:46.476 [INFO][6376] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="fa02e25e87b15e2f48a92a373c9456feb99eee2b757d3822e3fb67ac0ecb034a" iface="eth0" netns="" May 15 12:41:46.516819 containerd[1570]: 2025-05-15 12:41:46.476 [INFO][6376] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="fa02e25e87b15e2f48a92a373c9456feb99eee2b757d3822e3fb67ac0ecb034a" May 15 12:41:46.516819 containerd[1570]: 2025-05-15 12:41:46.476 [INFO][6376] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fa02e25e87b15e2f48a92a373c9456feb99eee2b757d3822e3fb67ac0ecb034a" May 15 12:41:46.516819 containerd[1570]: 2025-05-15 12:41:46.502 [INFO][6383] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fa02e25e87b15e2f48a92a373c9456feb99eee2b757d3822e3fb67ac0ecb034a" HandleID="k8s-pod-network.fa02e25e87b15e2f48a92a373c9456feb99eee2b757d3822e3fb67ac0ecb034a" Workload="172--236--125--189-k8s-calico--apiserver--78b5784dc8--8lbpp-eth0" May 15 12:41:46.516819 containerd[1570]: 2025-05-15 12:41:46.504 [INFO][6383] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 12:41:46.516819 containerd[1570]: 2025-05-15 12:41:46.504 [INFO][6383] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 15 12:41:46.516819 containerd[1570]: 2025-05-15 12:41:46.510 [WARNING][6383] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="fa02e25e87b15e2f48a92a373c9456feb99eee2b757d3822e3fb67ac0ecb034a" HandleID="k8s-pod-network.fa02e25e87b15e2f48a92a373c9456feb99eee2b757d3822e3fb67ac0ecb034a" Workload="172--236--125--189-k8s-calico--apiserver--78b5784dc8--8lbpp-eth0" May 15 12:41:46.516819 containerd[1570]: 2025-05-15 12:41:46.510 [INFO][6383] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fa02e25e87b15e2f48a92a373c9456feb99eee2b757d3822e3fb67ac0ecb034a" HandleID="k8s-pod-network.fa02e25e87b15e2f48a92a373c9456feb99eee2b757d3822e3fb67ac0ecb034a" Workload="172--236--125--189-k8s-calico--apiserver--78b5784dc8--8lbpp-eth0" May 15 12:41:46.516819 containerd[1570]: 2025-05-15 12:41:46.512 [INFO][6383] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 15 12:41:46.516819 containerd[1570]: 2025-05-15 12:41:46.514 [INFO][6376] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="fa02e25e87b15e2f48a92a373c9456feb99eee2b757d3822e3fb67ac0ecb034a" May 15 12:41:46.516819 containerd[1570]: time="2025-05-15T12:41:46.516455423Z" level=info msg="TearDown network for sandbox \"fa02e25e87b15e2f48a92a373c9456feb99eee2b757d3822e3fb67ac0ecb034a\" successfully" May 15 12:41:46.516819 containerd[1570]: time="2025-05-15T12:41:46.516480342Z" level=info msg="StopPodSandbox for \"fa02e25e87b15e2f48a92a373c9456feb99eee2b757d3822e3fb67ac0ecb034a\" returns successfully" May 15 12:41:46.517658 containerd[1570]: time="2025-05-15T12:41:46.517595970Z" level=info msg="RemovePodSandbox for \"fa02e25e87b15e2f48a92a373c9456feb99eee2b757d3822e3fb67ac0ecb034a\"" May 15 12:41:46.517658 containerd[1570]: time="2025-05-15T12:41:46.517628228Z" level=info msg="Forcibly stopping sandbox \"fa02e25e87b15e2f48a92a373c9456feb99eee2b757d3822e3fb67ac0ecb034a\"" May 15 12:41:46.628570 containerd[1570]: 2025-05-15 12:41:46.583 [WARNING][6401] cni-plugin/k8s.go 566: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="fa02e25e87b15e2f48a92a373c9456feb99eee2b757d3822e3fb67ac0ecb034a" WorkloadEndpoint="172--236--125--189-k8s-calico--apiserver--78b5784dc8--8lbpp-eth0" May 15 12:41:46.628570 containerd[1570]: 2025-05-15 12:41:46.583 [INFO][6401] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="fa02e25e87b15e2f48a92a373c9456feb99eee2b757d3822e3fb67ac0ecb034a" May 15 12:41:46.628570 containerd[1570]: 2025-05-15 12:41:46.583 [INFO][6401] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="fa02e25e87b15e2f48a92a373c9456feb99eee2b757d3822e3fb67ac0ecb034a" iface="eth0" netns="" May 15 12:41:46.628570 containerd[1570]: 2025-05-15 12:41:46.583 [INFO][6401] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="fa02e25e87b15e2f48a92a373c9456feb99eee2b757d3822e3fb67ac0ecb034a" May 15 12:41:46.628570 containerd[1570]: 2025-05-15 12:41:46.583 [INFO][6401] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fa02e25e87b15e2f48a92a373c9456feb99eee2b757d3822e3fb67ac0ecb034a" May 15 12:41:46.628570 containerd[1570]: 2025-05-15 12:41:46.609 [INFO][6408] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fa02e25e87b15e2f48a92a373c9456feb99eee2b757d3822e3fb67ac0ecb034a" HandleID="k8s-pod-network.fa02e25e87b15e2f48a92a373c9456feb99eee2b757d3822e3fb67ac0ecb034a" Workload="172--236--125--189-k8s-calico--apiserver--78b5784dc8--8lbpp-eth0" May 15 12:41:46.628570 containerd[1570]: 2025-05-15 12:41:46.609 [INFO][6408] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 12:41:46.628570 containerd[1570]: 2025-05-15 12:41:46.609 [INFO][6408] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 15 12:41:46.628570 containerd[1570]: 2025-05-15 12:41:46.617 [WARNING][6408] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fa02e25e87b15e2f48a92a373c9456feb99eee2b757d3822e3fb67ac0ecb034a" HandleID="k8s-pod-network.fa02e25e87b15e2f48a92a373c9456feb99eee2b757d3822e3fb67ac0ecb034a" Workload="172--236--125--189-k8s-calico--apiserver--78b5784dc8--8lbpp-eth0" May 15 12:41:46.628570 containerd[1570]: 2025-05-15 12:41:46.618 [INFO][6408] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fa02e25e87b15e2f48a92a373c9456feb99eee2b757d3822e3fb67ac0ecb034a" HandleID="k8s-pod-network.fa02e25e87b15e2f48a92a373c9456feb99eee2b757d3822e3fb67ac0ecb034a" Workload="172--236--125--189-k8s-calico--apiserver--78b5784dc8--8lbpp-eth0" May 15 12:41:46.628570 containerd[1570]: 2025-05-15 12:41:46.621 [INFO][6408] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 15 12:41:46.628570 containerd[1570]: 2025-05-15 12:41:46.624 [INFO][6401] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="fa02e25e87b15e2f48a92a373c9456feb99eee2b757d3822e3fb67ac0ecb034a" May 15 12:41:46.629286 containerd[1570]: time="2025-05-15T12:41:46.628596681Z" level=info msg="TearDown network for sandbox \"fa02e25e87b15e2f48a92a373c9456feb99eee2b757d3822e3fb67ac0ecb034a\" successfully" May 15 12:41:46.633730 containerd[1570]: time="2025-05-15T12:41:46.633407426Z" level=info msg="Ensure that sandbox fa02e25e87b15e2f48a92a373c9456feb99eee2b757d3822e3fb67ac0ecb034a in task-service has been cleanup successfully" May 15 12:41:46.635555 containerd[1570]: time="2025-05-15T12:41:46.635523708Z" level=info msg="RemovePodSandbox \"fa02e25e87b15e2f48a92a373c9456feb99eee2b757d3822e3fb67ac0ecb034a\" returns successfully" May 15 12:41:48.300726 systemd[1]: Started sshd@8-172.236.125.189:22-139.178.89.65:50414.service - OpenSSH per-connection server daemon (139.178.89.65:50414). May 15 12:41:48.643936 sshd[6419]: Accepted publickey for core from 139.178.89.65 port 50414 ssh2: RSA SHA256:gyeIyP7CTSF398gDeXUDBL3yfhdqSHwOrE2zyc7w3tk May 15 12:41:48.647153 sshd-session[6419]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 12:41:48.654379 systemd-logind[1541]: New session 8 of user core. May 15 12:41:48.659225 systemd[1]: Started session-8.scope - Session 8 of User core. May 15 12:41:49.006516 sshd[6421]: Connection closed by 139.178.89.65 port 50414 May 15 12:41:49.007352 sshd-session[6419]: pam_unix(sshd:session): session closed for user core May 15 12:41:49.011745 systemd[1]: sshd@8-172.236.125.189:22-139.178.89.65:50414.service: Deactivated successfully. May 15 12:41:49.022335 systemd[1]: session-8.scope: Deactivated successfully. May 15 12:41:49.024087 systemd-logind[1541]: Session 8 logged out. Waiting for processes to exit. May 15 12:41:49.025641 systemd-logind[1541]: Removed session 8. May 15 12:41:54.073748 systemd[1]: Started sshd@9-172.236.125.189:22-139.178.89.65:50422.service - OpenSSH per-connection server daemon (139.178.89.65:50422). May 15 12:41:54.417107 sshd[6434]: Accepted publickey for core from 139.178.89.65 port 50422 ssh2: RSA SHA256:gyeIyP7CTSF398gDeXUDBL3yfhdqSHwOrE2zyc7w3tk May 15 12:41:54.419113 sshd-session[6434]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 12:41:54.427173 systemd-logind[1541]: New session 9 of user core. May 15 12:41:54.433141 systemd[1]: Started session-9.scope - Session 9 of User core. 
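Every accepted login above produces the same record sequence: sshd "Accepted publickey", a pam_unix "session opened", a logind "New session", a session-N.scope started by systemd, and the mirror image of all four on disconnect. A minimal sketch (the program and its names are mine, not part of this host) that pairs the pam_unix open/close lines from a journal dump to count sessions still open:

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	opened, closed := 0, 0
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 1024*1024), 1024*1024) // journal lines can be long
	for sc.Scan() {
		line := sc.Text()
		// Only the PAM session records mark real login/logout boundaries.
		if !strings.Contains(line, "pam_unix(sshd:session)") {
			continue
		}
		switch {
		case strings.Contains(line, "session opened for user"):
			opened++
		case strings.Contains(line, "session closed for user"):
			closed++
		}
	}
	fmt.Printf("sshd sessions: %d opened, %d closed, %d still open\n",
		opened, closed, opened-closed)
}

One way to feed it is journalctl -o short-precise piped to the program's stdin. Connections rejected before authentication — like the one from 218.92.0.215 below — never reach PAM and are deliberately not counted.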
May 15 12:41:54.724022 sshd[6436]: Connection closed by 139.178.89.65 port 50422 May 15 12:41:54.724904 sshd-session[6434]: pam_unix(sshd:session): session closed for user core May 15 12:41:54.729429 systemd[1]: sshd@9-172.236.125.189:22-139.178.89.65:50422.service: Deactivated successfully. May 15 12:41:54.731895 systemd[1]: session-9.scope: Deactivated successfully. May 15 12:41:54.733316 systemd-logind[1541]: Session 9 logged out. Waiting for processes to exit. May 15 12:41:54.734906 systemd-logind[1541]: Removed session 9. May 15 12:41:56.289855 systemd[1]: Started sshd@10-172.236.125.189:22-218.92.0.215:60986.service - OpenSSH per-connection server daemon (218.92.0.215:60986). May 15 12:41:56.538025 sshd[6457]: Unable to negotiate with 218.92.0.215 port 60986: no matching key exchange method found. Their offer: diffie-hellman-group1-sha1,diffie-hellman-group14-sha1,diffie-hellman-group-exchange-sha1 [preauth] May 15 12:41:56.540326 systemd[1]: sshd@10-172.236.125.189:22-218.92.0.215:60986.service: Deactivated successfully. May 15 12:41:56.886709 kubelet[2830]: E0515 12:41:56.886093 2830 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" May 15 12:41:56.886709 kubelet[2830]: E0515 12:41:56.886124 2830 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" May 15 12:41:59.791119 systemd[1]: Started sshd@11-172.236.125.189:22-139.178.89.65:40626.service - OpenSSH per-connection server daemon (139.178.89.65:40626). May 15 12:42:00.131948 sshd[6462]: Accepted publickey for core from 139.178.89.65 port 40626 ssh2: RSA SHA256:gyeIyP7CTSF398gDeXUDBL3yfhdqSHwOrE2zyc7w3tk May 15 12:42:00.133591 sshd-session[6462]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 12:42:00.139026 systemd-logind[1541]: New session 10 of user core. May 15 12:42:00.145105 systemd[1]: Started session-10.scope - Session 10 of User core. May 15 12:42:00.462232 sshd[6464]: Connection closed by 139.178.89.65 port 40626 May 15 12:42:00.463633 sshd-session[6462]: pam_unix(sshd:session): session closed for user core May 15 12:42:00.469340 systemd-logind[1541]: Session 10 logged out. Waiting for processes to exit. May 15 12:42:00.469919 systemd[1]: sshd@11-172.236.125.189:22-139.178.89.65:40626.service: Deactivated successfully. May 15 12:42:00.472714 systemd[1]: session-10.scope: Deactivated successfully. May 15 12:42:00.475998 systemd-logind[1541]: Removed session 10. May 15 12:42:00.527798 systemd[1]: Started sshd@12-172.236.125.189:22-139.178.89.65:40630.service - OpenSSH per-connection server daemon (139.178.89.65:40630). May 15 12:42:00.872106 sshd[6477]: Accepted publickey for core from 139.178.89.65 port 40630 ssh2: RSA SHA256:gyeIyP7CTSF398gDeXUDBL3yfhdqSHwOrE2zyc7w3tk May 15 12:42:00.873557 sshd-session[6477]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 12:42:00.880210 systemd-logind[1541]: New session 11 of user core. May 15 12:42:00.886128 systemd[1]: Started session-11.scope - Session 11 of User core. May 15 12:42:01.213305 sshd[6479]: Connection closed by 139.178.89.65 port 40630 May 15 12:42:01.214001 sshd-session[6477]: pam_unix(sshd:session): session closed for user core May 15 12:42:01.218145 systemd-logind[1541]: Session 11 logged out. Waiting for processes to exit. 
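The refused connection from 218.92.0.215 above is a scanner speaking only legacy SHA-1 key exchange; this server's defaults share no algorithm with that offer, so negotiation stops before authentication. A small illustrative sketch of why the intersection is empty — the server-side list below is an assumed modern default set for illustration, not read from this machine:

package main

import "fmt"

func main() {
	offer := []string{ // copied from the sshd log line above
		"diffie-hellman-group1-sha1",
		"diffie-hellman-group14-sha1",
		"diffie-hellman-group-exchange-sha1",
	}
	server := map[string]bool{ // assumed modern defaults, for illustration
		"curve25519-sha256":                    true,
		"ecdh-sha2-nistp256":                   true,
		"diffie-hellman-group-exchange-sha256": true,
		"diffie-hellman-group16-sha512":        true,
		"diffie-hellman-group14-sha256":        true,
	}
	match := false
	for _, k := range offer {
		if server[k] {
			fmt.Println("common kex:", k)
			match = true
		}
	}
	if !match {
		fmt.Println("no matching key exchange method found") // mirrors the log
	}
}

Because the failure is pre-auth, it surfaces as a single [preauth] record and an immediately deactivated per-connection service, exactly as logged.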
May 15 12:42:01.219140 systemd[1]: sshd@12-172.236.125.189:22-139.178.89.65:40630.service: Deactivated successfully. May 15 12:42:01.221139 systemd[1]: session-11.scope: Deactivated successfully. May 15 12:42:01.222903 systemd-logind[1541]: Removed session 11. May 15 12:42:01.277520 systemd[1]: Started sshd@13-172.236.125.189:22-139.178.89.65:40634.service - OpenSSH per-connection server daemon (139.178.89.65:40634). May 15 12:42:01.619739 sshd[6489]: Accepted publickey for core from 139.178.89.65 port 40634 ssh2: RSA SHA256:gyeIyP7CTSF398gDeXUDBL3yfhdqSHwOrE2zyc7w3tk May 15 12:42:01.622633 sshd-session[6489]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 12:42:01.629878 systemd-logind[1541]: New session 12 of user core. May 15 12:42:01.635158 systemd[1]: Started session-12.scope - Session 12 of User core. May 15 12:42:01.944127 sshd[6494]: Connection closed by 139.178.89.65 port 40634 May 15 12:42:01.945071 sshd-session[6489]: pam_unix(sshd:session): session closed for user core May 15 12:42:01.951227 systemd-logind[1541]: Session 12 logged out. Waiting for processes to exit. May 15 12:42:01.951560 systemd[1]: sshd@13-172.236.125.189:22-139.178.89.65:40634.service: Deactivated successfully. May 15 12:42:01.955999 systemd[1]: session-12.scope: Deactivated successfully. May 15 12:42:01.958570 systemd-logind[1541]: Removed session 12. May 15 12:42:07.011997 systemd[1]: Started sshd@14-172.236.125.189:22-139.178.89.65:44408.service - OpenSSH per-connection server daemon (139.178.89.65:44408). May 15 12:42:07.357602 sshd[6511]: Accepted publickey for core from 139.178.89.65 port 44408 ssh2: RSA SHA256:gyeIyP7CTSF398gDeXUDBL3yfhdqSHwOrE2zyc7w3tk May 15 12:42:07.359306 sshd-session[6511]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 12:42:07.364023 systemd-logind[1541]: New session 13 of user core. May 15 12:42:07.369115 systemd[1]: Started session-13.scope - Session 13 of User core. May 15 12:42:07.673050 sshd[6513]: Connection closed by 139.178.89.65 port 44408 May 15 12:42:07.674247 sshd-session[6511]: pam_unix(sshd:session): session closed for user core May 15 12:42:07.679066 systemd-logind[1541]: Session 13 logged out. Waiting for processes to exit. May 15 12:42:07.680050 systemd[1]: sshd@14-172.236.125.189:22-139.178.89.65:44408.service: Deactivated successfully. May 15 12:42:07.682451 systemd[1]: session-13.scope: Deactivated successfully. May 15 12:42:07.686465 systemd-logind[1541]: Removed session 13. May 15 12:42:09.677930 containerd[1570]: time="2025-05-15T12:42:09.677823465Z" level=info msg="TaskExit event in podsandbox handler container_id:\"520fe2a0171e01ebb811af5ea3ed371a58a77d9d3df9fe7d96b7c75071e57be6\" id:\"f06feeb8b69fdaf1e7d4477ec317936f4fb1851bbe8e2b6446b2eb00dc0bdf05\" pid:6537 exited_at:{seconds:1747312929 nanos:677177203}" May 15 12:42:12.738295 systemd[1]: Started sshd@15-172.236.125.189:22-139.178.89.65:44420.service - OpenSSH per-connection server daemon (139.178.89.65:44420). 
May 15 12:42:12.886417 kubelet[2830]: E0515 12:42:12.886350 2830 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" May 15 12:42:13.022174 containerd[1570]: time="2025-05-15T12:42:13.022051282Z" level=info msg="TaskExit event in podsandbox handler container_id:\"83748827b7bdc0e3166c0530323687956b6fcae4b59154a076226251be51399c\" id:\"ee0e89dfd9ad130100562300adec5e235403ddfeacddf26f2d4dba3df07839d1\" pid:6584 exited_at:{seconds:1747312933 nanos:21779569}" May 15 12:42:13.023278 containerd[1570]: time="2025-05-15T12:42:13.023209681Z" level=info msg="TaskExit event in podsandbox handler container_id:\"83748827b7bdc0e3166c0530323687956b6fcae4b59154a076226251be51399c\" id:\"ca334a79af0759d07623ccf97312fb6a126362f706f0ff01bd477111c9d292ed\" pid:6582 exited_at:{seconds:1747312933 nanos:22073512}" May 15 12:42:13.091892 sshd[6549]: Accepted publickey for core from 139.178.89.65 port 44420 ssh2: RSA SHA256:gyeIyP7CTSF398gDeXUDBL3yfhdqSHwOrE2zyc7w3tk May 15 12:42:13.093651 sshd-session[6549]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 12:42:13.101269 systemd-logind[1541]: New session 14 of user core. May 15 12:42:13.108185 systemd[1]: Started session-14.scope - Session 14 of User core. May 15 12:42:13.416754 sshd[6601]: Connection closed by 139.178.89.65 port 44420 May 15 12:42:13.418607 sshd-session[6549]: pam_unix(sshd:session): session closed for user core May 15 12:42:13.425759 systemd[1]: sshd@15-172.236.125.189:22-139.178.89.65:44420.service: Deactivated successfully. May 15 12:42:13.428327 systemd[1]: session-14.scope: Deactivated successfully. May 15 12:42:13.429403 systemd-logind[1541]: Session 14 logged out. Waiting for processes to exit. May 15 12:42:13.431859 systemd-logind[1541]: Removed session 14. May 15 12:42:18.477660 systemd[1]: Started sshd@16-172.236.125.189:22-139.178.89.65:39448.service - OpenSSH per-connection server daemon (139.178.89.65:39448). May 15 12:42:18.820557 sshd[6614]: Accepted publickey for core from 139.178.89.65 port 39448 ssh2: RSA SHA256:gyeIyP7CTSF398gDeXUDBL3yfhdqSHwOrE2zyc7w3tk May 15 12:42:18.822701 sshd-session[6614]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 12:42:18.829082 systemd-logind[1541]: New session 15 of user core. May 15 12:42:18.832123 systemd[1]: Started session-15.scope - Session 15 of User core. May 15 12:42:19.138530 sshd[6616]: Connection closed by 139.178.89.65 port 39448 May 15 12:42:19.139417 sshd-session[6614]: pam_unix(sshd:session): session closed for user core May 15 12:42:19.144038 systemd-logind[1541]: Session 15 logged out. Waiting for processes to exit. May 15 12:42:19.144782 systemd[1]: sshd@16-172.236.125.189:22-139.178.89.65:39448.service: Deactivated successfully. May 15 12:42:19.147609 systemd[1]: session-15.scope: Deactivated successfully. May 15 12:42:19.150907 systemd-logind[1541]: Removed session 15. May 15 12:42:21.885995 kubelet[2830]: E0515 12:42:21.885343 2830 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" May 15 12:42:24.207179 systemd[1]: Started sshd@17-172.236.125.189:22-139.178.89.65:39460.service - OpenSSH per-connection server daemon (139.178.89.65:39460). 
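The recurring kubelet dns.go:153 errors above mean the node's resolv.conf lists more nameservers than the resolver limit of three, so kubelet applies only the first three (172.232.0.18 172.232.0.17 172.232.0.16) when building pod DNS config. A minimal sketch that reproduces the check — the three-entry cap is the glibc MAXNS limit kubelet honours; the program itself is illustrative:

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

const maxNameservers = 3 // glibc MAXNS; kubelet applies the same cap

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if len(servers) > maxNameservers {
		fmt.Printf("resolv.conf lists %d nameservers; only the first %d are applied: %v\n",
			len(servers), maxNameservers, servers[:maxNameservers])
	} else {
		fmt.Printf("resolv.conf lists %d nameservers (within the limit)\n", len(servers))
	}
}

The warning is benign so long as the three applied servers are the ones that matter; silencing it means trimming the extra nameserver entries where resolv.conf is generated.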
May 15 12:42:24.555502 sshd[6629]: Accepted publickey for core from 139.178.89.65 port 39460 ssh2: RSA SHA256:gyeIyP7CTSF398gDeXUDBL3yfhdqSHwOrE2zyc7w3tk May 15 12:42:24.557409 sshd-session[6629]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 12:42:24.565054 systemd-logind[1541]: New session 16 of user core. May 15 12:42:24.571233 systemd[1]: Started session-16.scope - Session 16 of User core. May 15 12:42:24.880168 sshd[6631]: Connection closed by 139.178.89.65 port 39460 May 15 12:42:24.880639 sshd-session[6629]: pam_unix(sshd:session): session closed for user core May 15 12:42:24.887873 systemd[1]: sshd@17-172.236.125.189:22-139.178.89.65:39460.service: Deactivated successfully. May 15 12:42:24.890926 systemd[1]: session-16.scope: Deactivated successfully. May 15 12:42:24.893865 systemd-logind[1541]: Session 16 logged out. Waiting for processes to exit. May 15 12:42:24.895747 systemd-logind[1541]: Removed session 16. May 15 12:42:24.940907 systemd[1]: Started sshd@18-172.236.125.189:22-139.178.89.65:39476.service - OpenSSH per-connection server daemon (139.178.89.65:39476). May 15 12:42:25.275386 sshd[6643]: Accepted publickey for core from 139.178.89.65 port 39476 ssh2: RSA SHA256:gyeIyP7CTSF398gDeXUDBL3yfhdqSHwOrE2zyc7w3tk May 15 12:42:25.278357 sshd-session[6643]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 12:42:25.285904 systemd-logind[1541]: New session 17 of user core. May 15 12:42:25.290125 systemd[1]: Started session-17.scope - Session 17 of User core. May 15 12:42:25.761777 sshd[6645]: Connection closed by 139.178.89.65 port 39476 May 15 12:42:25.763592 sshd-session[6643]: pam_unix(sshd:session): session closed for user core May 15 12:42:25.769116 systemd[1]: sshd@18-172.236.125.189:22-139.178.89.65:39476.service: Deactivated successfully. May 15 12:42:25.774194 systemd[1]: session-17.scope: Deactivated successfully. May 15 12:42:25.776446 systemd-logind[1541]: Session 17 logged out. Waiting for processes to exit. May 15 12:42:25.778079 systemd-logind[1541]: Removed session 17. May 15 12:42:25.825160 systemd[1]: Started sshd@19-172.236.125.189:22-139.178.89.65:39478.service - OpenSSH per-connection server daemon (139.178.89.65:39478). May 15 12:42:26.166203 sshd[6654]: Accepted publickey for core from 139.178.89.65 port 39478 ssh2: RSA SHA256:gyeIyP7CTSF398gDeXUDBL3yfhdqSHwOrE2zyc7w3tk May 15 12:42:26.168313 sshd-session[6654]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 12:42:26.174028 systemd-logind[1541]: New session 18 of user core. May 15 12:42:26.180097 systemd[1]: Started session-18.scope - Session 18 of User core. May 15 12:42:28.038345 sshd[6656]: Connection closed by 139.178.89.65 port 39478 May 15 12:42:28.039273 sshd-session[6654]: pam_unix(sshd:session): session closed for user core May 15 12:42:28.044993 systemd[1]: sshd@19-172.236.125.189:22-139.178.89.65:39478.service: Deactivated successfully. May 15 12:42:28.051509 systemd[1]: session-18.scope: Deactivated successfully. May 15 12:42:28.052193 systemd[1]: session-18.scope: Consumed 572ms CPU time, 72.9M memory peak. May 15 12:42:28.055891 systemd-logind[1541]: Session 18 logged out. Waiting for processes to exit. May 15 12:42:28.059012 systemd-logind[1541]: Removed session 18. May 15 12:42:28.098576 systemd[1]: Started sshd@20-172.236.125.189:22-139.178.89.65:58988.service - OpenSSH per-connection server daemon (139.178.89.65:58988). 
May 15 12:42:28.433942 sshd[6669]: Accepted publickey for core from 139.178.89.65 port 58988 ssh2: RSA SHA256:gyeIyP7CTSF398gDeXUDBL3yfhdqSHwOrE2zyc7w3tk May 15 12:42:28.436095 sshd-session[6669]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 12:42:28.444902 systemd-logind[1541]: New session 19 of user core. May 15 12:42:28.449218 systemd[1]: Started session-19.scope - Session 19 of User core. May 15 12:42:28.865245 sshd[6675]: Connection closed by 139.178.89.65 port 58988 May 15 12:42:28.866218 sshd-session[6669]: pam_unix(sshd:session): session closed for user core May 15 12:42:28.870813 systemd[1]: sshd@20-172.236.125.189:22-139.178.89.65:58988.service: Deactivated successfully. May 15 12:42:28.873367 systemd[1]: session-19.scope: Deactivated successfully. May 15 12:42:28.874784 systemd-logind[1541]: Session 19 logged out. Waiting for processes to exit. May 15 12:42:28.876165 systemd-logind[1541]: Removed session 19. May 15 12:42:28.929908 systemd[1]: Started sshd@21-172.236.125.189:22-139.178.89.65:59004.service - OpenSSH per-connection server daemon (139.178.89.65:59004). May 15 12:42:29.278915 sshd[6685]: Accepted publickey for core from 139.178.89.65 port 59004 ssh2: RSA SHA256:gyeIyP7CTSF398gDeXUDBL3yfhdqSHwOrE2zyc7w3tk May 15 12:42:29.280726 sshd-session[6685]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 12:42:29.287057 systemd-logind[1541]: New session 20 of user core. May 15 12:42:29.297119 systemd[1]: Started session-20.scope - Session 20 of User core. May 15 12:42:29.603245 sshd[6687]: Connection closed by 139.178.89.65 port 59004 May 15 12:42:29.604473 sshd-session[6685]: pam_unix(sshd:session): session closed for user core May 15 12:42:29.608896 systemd[1]: sshd@21-172.236.125.189:22-139.178.89.65:59004.service: Deactivated successfully. May 15 12:42:29.611331 systemd[1]: session-20.scope: Deactivated successfully. May 15 12:42:29.612669 systemd-logind[1541]: Session 20 logged out. Waiting for processes to exit. May 15 12:42:29.614061 systemd-logind[1541]: Removed session 20. May 15 12:42:34.661985 systemd[1]: Started sshd@22-172.236.125.189:22-139.178.89.65:59014.service - OpenSSH per-connection server daemon (139.178.89.65:59014). May 15 12:42:35.004796 sshd[6712]: Accepted publickey for core from 139.178.89.65 port 59014 ssh2: RSA SHA256:gyeIyP7CTSF398gDeXUDBL3yfhdqSHwOrE2zyc7w3tk May 15 12:42:35.006395 sshd-session[6712]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 12:42:35.011732 systemd-logind[1541]: New session 21 of user core. May 15 12:42:35.020086 systemd[1]: Started session-21.scope - Session 21 of User core. May 15 12:42:35.304081 sshd[6714]: Connection closed by 139.178.89.65 port 59014 May 15 12:42:35.305048 sshd-session[6712]: pam_unix(sshd:session): session closed for user core May 15 12:42:35.309779 systemd[1]: sshd@22-172.236.125.189:22-139.178.89.65:59014.service: Deactivated successfully. May 15 12:42:35.311912 systemd[1]: session-21.scope: Deactivated successfully. May 15 12:42:35.313716 systemd-logind[1541]: Session 21 logged out. Waiting for processes to exit. May 15 12:42:35.315337 systemd-logind[1541]: Removed session 21. 
May 15 12:42:39.659603 containerd[1570]: time="2025-05-15T12:42:39.659558192Z" level=info msg="TaskExit event in podsandbox handler container_id:\"520fe2a0171e01ebb811af5ea3ed371a58a77d9d3df9fe7d96b7c75071e57be6\" id:\"b554f93333114fc0e3055aec26cef4e606890618ca05032aba734cbf4c1bedc8\" pid:6736 exited_at:{seconds:1747312959 nanos:659202361}"
May 15 12:42:40.368021 systemd[1]: Started sshd@23-172.236.125.189:22-139.178.89.65:42102.service - OpenSSH per-connection server daemon (139.178.89.65:42102).
May 15 12:42:40.707327 sshd[6750]: Accepted publickey for core from 139.178.89.65 port 42102 ssh2: RSA SHA256:gyeIyP7CTSF398gDeXUDBL3yfhdqSHwOrE2zyc7w3tk
May 15 12:42:40.708877 sshd-session[6750]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 12:42:40.714566 systemd-logind[1541]: New session 22 of user core.
May 15 12:42:40.720104 systemd[1]: Started session-22.scope - Session 22 of User core.
May 15 12:42:40.886131 kubelet[2830]: E0515 12:42:40.886010 2830 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16"
May 15 12:42:41.008679 sshd[6752]: Connection closed by 139.178.89.65 port 42102
May 15 12:42:41.010148 sshd-session[6750]: pam_unix(sshd:session): session closed for user core
May 15 12:42:41.014326 systemd-logind[1541]: Session 22 logged out. Waiting for processes to exit.
May 15 12:42:41.015061 systemd[1]: sshd@23-172.236.125.189:22-139.178.89.65:42102.service: Deactivated successfully.
May 15 12:42:41.017855 systemd[1]: session-22.scope: Deactivated successfully.
May 15 12:42:41.020624 systemd-logind[1541]: Removed session 22.
May 15 12:42:41.886191 kubelet[2830]: E0515 12:42:41.885602 2830 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16"
May 15 12:42:42.991519 containerd[1570]: time="2025-05-15T12:42:42.991430939Z" level=info msg="TaskExit event in podsandbox handler container_id:\"83748827b7bdc0e3166c0530323687956b6fcae4b59154a076226251be51399c\" id:\"c649dc7647724561d6d61ef9250dbc12040882f8f3bf4e1eeef831ed9b293b38\" pid:6775 exited_at:{seconds:1747312962 nanos:990264021}"
May 15 12:42:43.886340 kubelet[2830]: E0515 12:42:43.886221 2830 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16"
May 15 12:42:46.073674 systemd[1]: Started sshd@24-172.236.125.189:22-139.178.89.65:42108.service - OpenSSH per-connection server daemon (139.178.89.65:42108).
May 15 12:42:46.428111 sshd[6792]: Accepted publickey for core from 139.178.89.65 port 42108 ssh2: RSA SHA256:gyeIyP7CTSF398gDeXUDBL3yfhdqSHwOrE2zyc7w3tk
May 15 12:42:46.430426 sshd-session[6792]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 12:42:46.436903 systemd-logind[1541]: New session 23 of user core.
May 15 12:42:46.442162 systemd[1]: Started session-23.scope - Session 23 of User core.
May 15 12:42:46.770273 sshd[6795]: Connection closed by 139.178.89.65 port 42108
May 15 12:42:46.771107 sshd-session[6792]: pam_unix(sshd:session): session closed for user core
May 15 12:42:46.776279 systemd-logind[1541]: Session 23 logged out. Waiting for processes to exit.
May 15 12:42:46.776896 systemd[1]: sshd@24-172.236.125.189:22-139.178.89.65:42108.service: Deactivated successfully.
May 15 12:42:46.779425 systemd[1]: session-23.scope: Deactivated successfully.
May 15 12:42:46.782391 systemd-logind[1541]: Removed session 23.